I am trying to create a reporting output file that lists all the buckets of different GCP projects.
The problem is that it neither prints the output to stdout nor creates the file.
I can confirm that one bucket exists in the project for testing purposes.
Here is the code:
main.tf
data "google_client_config" "default" {}
resource "null_resource" "list_all_buckets" {
triggers = {
filename = "${path.module}/storage_output.csv"
}
provisioner "local-exec" {
command ="bash list_buckets.sh ${data.google_client_config.default.access_token} '*<gcp project name in single quote>*' >> storage_ouput.csv"
working_dir = "${path.module}"
}
}
data "local_file" "test" {
filename = "${null_resource.list_all_buckets.triggers.filename}"
}
output "result" {
value = "${data.local_file.test.content}"
}
list_buckets.sh
#!/bin/bash
readonly TOKEN="$1"
readonly PROJECT="$2"
readonly URL="https://storage.googleapis.com/storage/v1/b?project=${2}"
LIST_BUCKETS="$(curl -X GET -H "Authorization: Bearer "${TOKEN} ${URL})"
Please feel free to ask any questions.
I am expecting a storage_ouput.csv file to be created in the existing module directory.
Output (truncated):
You can apply this plan to save these new output values to the Terraform
state, without changing any real infrastructure.
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
result = ""
Cleaning up project directory and file based variables
Job succeeded
Provisioners in Terraform only run at creation or destruction time (creation by default), which means your script is only called the first time you apply the plan. That run should create a storage_ouput.csv file (note the typo, unlike the path in your triggers, which is storage_output.csv).
For testing purposes only, I recreated the following:
resource "null_resource" "list_all_buckets" {
triggers = {
filename = "${path.module}/storage_output.csv"
}
provisioner "local-exec" {
command = "date >> ${path.module}/storage_output.csv"
working_dir = path.module
}
}
data "local_file" "test" {
filename = null_resource.list_all_buckets.triggers.filename
}
output "result" {
value = data.local_file.test.content
}
which runs the date command and writes its output to storage_output.csv. Running this for the first time creates the CSV file in the current directory, but running it again does nothing, since the null_resource has already been created.
To force recreation I would need to taint it, or run:
terraform apply -replace="null_resource.list_all_buckets"
which forces recreation of the null_resource and therefore executes the script in your provisioner.
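Alternatively, if you want the script to run on every apply, a commonly used workaround (sketched below; not part of your original configuration) is to make a trigger value change on every run, for example with timestamp():

resource "null_resource" "list_all_buckets" {
  triggers = {
    # timestamp() produces a new value on every plan, so Terraform replaces
    # this resource and the provisioner runs again on each apply
    always_run = timestamp()
  }

  provisioner "local-exec" {
    command     = "date >> ${path.module}/storage_output.csv"
    working_dir = path.module
  }
}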
As HashiCorp mentions in their documentation:
Use provisioners as a last resort. There are better alternatives for most situations.
You may want to have a look at Google Storage data sources such as:
google_storage_bucket
Or, if you created these buckets with Terraform, you can list your resources with terraform show or terraform state list. And if they were created in different workspaces, you could pull them in using terraform_remote_state.
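For example, if you already know a bucket's name, the google_storage_bucket data source can read it without any provisioner. A minimal sketch (the bucket name is a placeholder):

data "google_storage_bucket" "existing" {
  # placeholder name; replace with a real bucket in your project
  name = "my-existing-bucket"
}

output "bucket_self_link" {
  value = data.google_storage_bucket.existing.self_link
}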
I use Terraform to manage Google Cloud Functions resources. The initial deployment of the cloud function worked, but further deployments with changed cloud function source code (the source archive sourcecode.zip) were not picked up: the function was not redeployed when I ran terraform apply after updating the source archive.
The storage bucket object gets updated, but this does not trigger an update/redeployment of the cloud function resource.
Is this an error in the provider?
Is there a way to redeploy a function in Terraform when the code changes?
The simplified source code I am using:
resource "google_storage_bucket" "cloud_function_source_bucket" {
name = "${local.project}-function-bucket"
location = local.region
uniform_bucket_level_access = true
}
resource "google_storage_bucket_object" "function_source_archive" {
name = "sourcecode.zip"
bucket = google_storage_bucket.cloud_function_source_bucket.name
source = "./../../../sourcecode.zip"
}
resource "google_cloudfunctions_function" "test_function" {
name = "test_func"
runtime = "python39"
region = local.region
project = local.project
available_memory_mb = 256
source_archive_bucket = google_storage_bucket.cloud_function_source_bucket.name
source_archive_object = google_storage_bucket_object.function_source_archive.name
trigger_http = true
entry_point = "trigger_endpoint"
service_account_email = google_service_account.function_service_account.email
vpc_connector = "projects/${local.project}/locations/${local.region}/connectors/serverless-main"
vpc_connector_egress_settings = "ALL_TRAFFIC"
ingress_settings = "ALLOW_ALL"
}
You can append the MD5 or SHA256 checksum of the zip file's content to the bucket object's name. That will trigger recreation of the cloud function whenever the source code changes:
${data.archive_file.function_src.output_md5}
data "archive_file" "function_src" {
type = "zip"
source_dir = "SOURCECODE_PATH/sourcecode"
output_path = "./SAVING/PATH/sourcecode.zip"
}
resource "google_storage_bucket_object" "function_source_archive" {
name = "sourcecode.${data.archive_file.function_src.output_md5}.zip"
bucket = google_storage_bucket.cloud_function_source_bucket.name
source = data.archive_file.function_src.output_path
}
You can read more about terraform archive here - terraform archive_file
You might consider that a defect; personally, I am not so sure about it.
Terraform applies some logic when an apply command is executed.
The question to think about: how does Terraform know that the source code of the cloud function has changed and that the function has to be redeployed? Terraform does not "read" the cloud function source code and does not compare it with the previous version. It only reads the Terraform configuration files. And if nothing has changed in those files (in comparison to the state file and the resources that exist in the GCP projects), there is nothing to redeploy.
Therefore, something has to change, for example the name of the archive file. In that case Terraform finds out that the cloud function has to be redeployed (because the state file has the old name of the archive object), and the cloud function is redeployed.
An example of that code, with a more detailed explanation, was provided some time ago; don't pay attention to whether the question itself works, just read the answer.
I need to upload a folder to an S3 bucket. When I apply for the first time, it uploads fine, but I have two problems:
The uploaded version outputs as null. I would expect some version_id like 1, 2, 3.
When running terraform apply again, it says Apply complete! Resources: 0 added, 0 changed, 0 destroyed. I would expect it to upload every time I run terraform apply and to create a new version.
What am I doing wrong? Here is my Terraform config:
resource "aws_s3_bucket" "my_bucket" {
bucket = "my_bucket_name"
versioning {
enabled = true
}
}
resource "aws_s3_bucket_object" "file_upload" {
bucket = "my_bucket"
key = "my_bucket_key"
source = "my_files.zip"
}
output "my_bucket_file_version" {
value = "${aws_s3_bucket_object.file_upload.version_id}"
}
Terraform only makes changes to remote objects when it detects a difference between the configuration and the remote object's attributes. The configuration as you've written it so far includes only the filename; it includes nothing about the content of the file, so Terraform can't react to the file changing.
To make subsequent changes, there are a few options:
You could use a different local filename for each new version.
You could use a different remote object path for each new version.
You can use the object etag to let Terraform recognize when the content has changed, regardless of the local filename or object path.
The last of these seems closest to what you want in this case. To do that, add the etag argument and set it to an MD5 hash of the file:
resource "aws_s3_bucket_object" "file_upload" {
bucket = "my_bucket"
key = "my_bucket_key"
source = "${path.module}/my_files.zip"
etag = "${filemd5("${path.module}/my_files.zip")}"
}
With that extra argument in place, Terraform will detect when the MD5 hash of the file on disk is different than that stored remotely in S3 and will plan to update the object accordingly.
(I'm not sure what's going on with version_id. It should work as long as versioning is enabled on the bucket.)
The preferred solution is now to use the source_hash property. Note that aws_s3_bucket_object has been replaced by aws_s3_object.
locals {
  object_source = "${path.module}/my_files.zip"
}

resource "aws_s3_object" "file_upload" {
  bucket      = "my_bucket"
  key         = "my_bucket_key"
  source      = local.object_source
  source_hash = filemd5(local.object_source)
}
Note that etag can have issues when encryption is used.
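Note also that on AWS provider v4 and later (where aws_s3_object replaces aws_s3_bucket_object), bucket versioning is configured with a separate resource rather than an inline versioning block. A minimal sketch, assuming provider v4+ and the bucket name from the question:

resource "aws_s3_bucket" "my_bucket" {
  bucket = "my_bucket_name"
}

# versioning is its own resource in provider v4+
resource "aws_s3_bucket_versioning" "my_bucket" {
  bucket = aws_s3_bucket.my_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}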
You shouldn't be using Terraform to do this. Terraform is supposed to orchestrate and provision your infrastructure and its configuration, not files. That said, Terraform is not aware of changes to your files; unless you change their names, Terraform will not update the state.
Alternatively, you can use local-exec to do this. Something like:
resource "aws_s3_bucket" "my-bucket" {
  # ...

  provisioner "local-exec" {
    command = "aws s3 cp path_to_my_file s3://${aws_s3_bucket.my-bucket.id}/"
  }
}
I am writing a small script that takes a small file from my local machine and puts it into an AWS S3 bucket.
My terraform.tf:
provider "aws" {
region = "us-east-1"
version = "~> 1.6"
}
terraform {
backend "s3" {
bucket = "${var.bucket_testing}"
kms_key_id = "arn:aws:kms:us-east-1:12345678900:key/12312313ed-34sd-6sfa-90cvs-1234asdfasd"
key = "testexport/exportFile.tfstate"
region = "us-east-1"
encrypt = true
}
}
data "aws_s3_bucket" "pr-ip" {
bucket = "${var.bucket_testing}"
}
resource "aws_s3_bucket_object" "put_file" {
bucket = "${data.aws_s3_bucket.pr-ip.id}"
key = "${var.file_path}/${var.file_name}"
source = "src/Datafile.txt"
etag = "${md5(file("src/Datafile.txt"))}"
kms_key_id = "arn:aws:kms:us-east-1:12345678900:key/12312313ed-34sd-6sfa-90cvs-1234asdfasd"
server_side_encryption = "aws:kms"
}
However, when I init:
terraform init
#=>
Terraform initialized in an empty directory!
The directory has no Terraform configuration files. You may begin working with Terraform immediately by creating Terraform configuration files.
and then try to apply:
terraform apply
#=>
Error: No configuration files found!
Apply requires configuration to be present. Applying without a configuration would mark everything for destruction, which is normally not what is desired. If you would like to destroy everything, please run 'terraform destroy' instead which does not require any configuration files.
I get the error above. I have also set up my default AWS Access Key ID and its value.
What can I do?
This error means that you have run the command in the wrong place. You have to be in the directory that contains your configuration files, so before running init or apply you have to cd to your Terraform project folder.
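For example (the path here is only illustrative):
cd /path/to/terraform-project
terraform init
terraform apply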
Error: No configuration files found!
The above error arises when you are not in the folder that contains your configuration files.
To remediate the situation, create a .tf file in the project folder you will be working in.
Note: an empty .tf file will also eliminate the error, but it will be of limited use since it contains no provider info.
See the example below:
provider "aws" {
region = "us-east" #Below value will be asked when the terraform apply command is executed if not provided here
}
So, in order for the terraform apply command to execute successfully, make sure of the points below:
You need to be in your Terraform project folder (it can be any directory).
It must contain a .tf file, which should preferably contain the provider info.
Execute terraform init to initialize the backend and provider plugins.
You are now good to execute terraform apply (without any "no config" error).
In case anyone comes across this now: I ran into an issue where my TF_WORKSPACE env var was set to a different workspace than the one I intended for the directory I was in. Double-check your workspace with
terraform workspace show
to show your available workspaces
terraform workspace list
to use one of the listed workspaces:
terraform workspace select <workspace name>
If the TF_WORKSPACE env var is set when you try to use terraform workspace select, Terraform will print a message telling you about the potential issue:
The selected workspace is currently overridden using the TF_WORKSPACE
environment variable.
To select a new workspace, either update this environment variable or unset
it and then run this command again.
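In that case, unsetting the override (or updating it to the workspace you actually want) is enough, for example:
unset TF_WORKSPACE
terraform workspace select <workspace name>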
I had the same error as you. In my case it was not a VPN error but incorrect file naming: I was in the project folder, but there was no properly named configuration file. To remedy the situation, I created a .tf file with the vim editor (vi aws.tf), then populated the file with the defined variables. Mine is working now.
I had the same issue too. Remember that Terraform filenames should end with the .tf extension.
Another possible reason: if you are using modules, the module source URL may be incorrect.
When I had:
source = "git::ssh://git#git.companyname.com/observability.git//modules/ec2?ref=v2.0.0"
instead of:
source = "git::ssh://git#git.companyname.com/observability.git//terraform/modules/ec2?ref=v2.0.0"
I was seeing the same error message as you.
I got this error this morning when deploying to production, on a project which has been around for years and where nothing had changed. We finally traced it down to the fact that the person who created the production deploy ticket had pasted this command into an email using Outlook:
terraform init --reconfigure
Microsoft, in its infinite wisdom, combined the two hyphens into one, and that one hyphen wasn't even the standard ASCII hyphen character (I think it's called an "en-dash"):
terraform init –reconfigure
This caused Terraform 0.12.31 to give the helpful error message:
Terraform initialized in an empty directory!
The directory has no Terraform configuration files. You may begin working
with Terraform immediately by creating Terraform configuration files.
It took us half an hour and another pair of eyes to notice that the hyphens were incorrect and needed to be re-typed! (I think terraform thought "reconfigure" was the name of the directory we wanted to run the init in, which of course didn't exist. Perhaps terraform could be improved to name the directory it's looking in when it reports this error?)
Thanks Microsoft for always being helpful (not)!
I have used Terragrunt to orchestrate the creation of a non-default AWS VPC.
I've got S3/DynamoDB state mgmt, and the VPC code is a module. I have the 'VPC environment' terraform.tfvars code checked into a second repo as per the terragrunt README.md.
I created a second module which will eventually create hosts in this VPC but for now just aims to output its ID. I have created a separate 'hosts environment' / terraform.tfvars for the instantiation of this module.
I run terragrunt apply in the VPC environment directory - VPC created
I run terragrunt apply a second time in the hosts environment directory - the output directive doesn't work (no error, but an incorrect value; see below).
This is a precursor to one day running terragrunt apply-all in the parent directory of the VPC/hosts environment directories; my reading of the docs suggests using a terraform_remote_state data source to expose the VPC ID, so I specified access like this in the data.tf file of the hosts module:
data "terraform_remote_state" "vpc" {
backend = "s3"
config {
bucket = "myBucket"
key = "keyToMy/vpcEnvironment.tfstate"
region = "stateRegion"
}
}
Then, in the hosts module outputs.tf, I specified an output to check assignment:
output "mon_vpc" {
value = "${data.terraform_remote_state.vpc.id}"
}
When I run (2) above, it exits with:
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
mon_vpc = 2018-06-02 23:14:42.958848954 +0000 UTC
Questions:
I'm going wrong somewhere in setting up the code so that the hosts environment correctly acquires the VPC ID from the already-existing VPC (Terraform state file); any advice on what to change here would be appreciated.
It looks like I've managed to acquire the date when the VPC was created rather than its ID, which, given the code, is perplexing; does anyone know why?
I'm not using community modules - all hand rolled.
EDIT: In response to Brendan Miller, here is a bit more. In my VPC module, I have an outputs.tf containing, among other outputs:
output "aws_vpc.mv.id-op" {
value = "${aws_vpc.mv.id}"
}
and the vpc.tf contains
resource "aws_vpc" "mv" {
cidr_block = "${var.vpcCidr}"
enable_dns_support = true
enable_dns_hostnames = true
tags = {
Name = "mv-vpc-${var.aws_region}"
}
}
As this config results in a VPC being created, and as most of the parameters are <computed>, I assumed the state would contain sufficient data for other modules to refer to by consulting it (I assumed at first that Terraform used the AWS API for this under the bonnet, rather than consulting a different state key).
EDIT 2: Read all of Brendan Miller's answer and the following comments first.
Use of periods in output names causes a problem, as it confuses Terraform (see Brendan's answer below for the reference format):
Error: output 'mon_vpc': unknown resource 'data.aws_vpc.mv-ds' referenced in variable data.aws_vpc.mv-ds.vpc.id
You named your output aws_vpc.mv.id-op, but when you retrieve it you are retrieving just id. You could try
data.terraform_remote_state.vpc.aws_vpc.mv.id-op
but I'm not sure whether Terraform will complain about the additional periods. However, the format should always be
data.terraform_remote_state.<name of the remote state module>.<name of the output>
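In practice the simplest fix is to avoid periods in the output name altogether. A sketch of that approach (the output name vpc_id is illustrative), using the same 0.11-style syntax as the question:

# in the VPC module's outputs.tf
output "vpc_id" {
  value = "${aws_vpc.mv.id}"
}

# in the hosts module
output "mon_vpc" {
  value = "${data.terraform_remote_state.vpc.vpc_id}"
}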
You mentioned wanting to be able to get this info with the AWS API. That is also possible by using the aws_vpc data source. Their example uses id, but you can also use any tag you used on your vpc.
Like this:
data "aws_vpc" "default" {
filter {
name = "tag:Name"
values = ["example-vpc-name"]
}
}
Then you can use this for the ID:
${data.aws_vpc.default.id}
In addition, this retrieves all tags that were set, for example:
${data.aws_vpc.default.tags.Name}
And the CIDR block:
${data.aws_vpc.default.cidr_block}
As well as some other info. This can be very useful for storing and retrieving things about your VPC.
I am trying to set up Terraform to deploy into various AWS regions. I am passing the region on the command line since it is one of the few values that changes between runs. I am using -var "region=east", for example, and it works in other modules by assigning the region correctly; the main.tf where I use this value is set up correctly.
When creating a resource there is a lookup that uses region as the map key; amis is a map of regions to AMI values, and I only want the AMI that matches for this resource. I tried two methods. Using command-line interpolation:
ami = "${lookup(var.amis, $${var.region})}"
or as an intermediate variable:
variables "map_region" {
  default = "${var.region}"
}
ami = "${lookup(var.amis, var.map_region)}"
Both give me syntax errors, and looking through the documentation for lookup I don't see that Terraform supports this. Has anyone else tried this successfully in some manner, or does anyone know a better way to pull a value out of a map using a command-line variable?
EDIT:
Part of the problem was hidden because I was using a Bash script to run the Terraform modules. It ran destroy -force, then apply. Because I considered the command-line variables part of the build, I did not add them to the destroy command, which is where they were being requested, giving me a prompt to enter them. Once I added the -var flags to the destroy command as well as to apply, this all worked.
ami = "${lookup(var.amis, $${var.region})}"
is wrong because $$ is only valid for interpolated variables within inline templates.
variables "map_region" {
default = "${var.region}"
}
ami = "${lookup(var.amis, var.map_region)}"
does not work because a variable's default value must be a static literal; it cannot be interpolated from another variable (and the block keyword is variable, not variables).
Terraform console is a useful tool for trying out interpolated expressions. Suppose the variables are defined as follows:
$ cat vars.tf
variable "amis" {
  type = "map"

  default = {
    "us-east-2" = "ami-58f5db3d"
    "us-east-1" = "ami-fad25980"
  }
}

variable "region" {}
Fire up the console, passing a value for region to emulate what you would be doing with plan, apply, etc.:
$ terraform console -var "region=us-east-1"
> var.region
us-east-1
> lookup(var.amis, var.region)
ami-fad25980
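Carried back into a resource, the same expression should work directly. A minimal sketch (the resource and its other arguments are only illustrative):

resource "aws_instance" "example" {
  # picks the AMI matching whatever region was passed with -var "region=..."
  ami           = "${lookup(var.amis, var.region)}"
  instance_type = "t2.micro"
}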
Hope this helps.