Files are not archived in Terraform before being uploaded to GCP - google-cloud-platform

Despite using the depends_on directive, it looks like the zip is not created before Terraform tries to put it in the bucket. Judging by the pipeline output, it simply skips archiving the files before firing the upload to the bucket. Both source files (index.js and package.json) exist.
resource "google_storage_bucket" "cloud-functions" {
project = var.project-1-id
name = "${var.project-1-id}-cloud-functions"
location = var.project-1-region
}
resource "google_storage_bucket_object" "start_instance" {
name = "start_instance.zip"
bucket = google_storage_bucket.cloud-functions.name
source = "${path.module}/start_instance.zip"
depends_on = [
data.archive_file.start_instance,
]
}
data "archive_file" "start_instance" {
type = "zip"
output_path = "${path.module}/start_instance.zip"
source {
content = file("${path.module}/scripts/start_instance/index.js")
filename = "index.js"
}
source {
content = file("${path.module}/scripts/start_instance/package.json")
filename = "package.json"
}
}
Terraform has been successfully initialized!
$ terraform apply -input=false "planfile"
google_storage_bucket_object.stop_instance: Creating...
google_storage_bucket_object.start_instance: Creating...
Error: open ./start_instance.zip: no such file or directory
on cloud_functions.tf line 41, in resource "google_storage_bucket_object" "start_instance":
41: resource "google_storage_bucket_object" "start_instance" {
LOGS:
2020-11-18T13:02:56.796Z [DEBUG] plugin.terraform-provider-google_v3.40.0_x5: 2020/11/18 13:02:56 [WARN] Failed to read source file "./start_instance.zip". Cannot compute md5 hash for it.
2020/11/18 13:02:56 [WARN] Provider "registry.terraform.io/hashicorp/google" produced an invalid plan for google_storage_bucket_object.stop_instance, but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .detect_md5hash: planned value cty.StringVal("different hash") does not match config value cty.NullVal(cty.String)
2020/11/18 13:02:56 [WARN] Provider "registry.terraform.io/hashicorp/google" produced an invalid plan for google_storage_bucket_object.start_instance, but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .detect_md5hash: planned value cty.StringVal("different hash") does not match config value cty.NullVal(cty.String)

I had exactly the same issue with a GitLab CI/CD pipeline. After some digging, according to the discussion, I found out that with this setup the plan and apply stages run in separate containers, and the archiving step is executed in the plan stage, so the zip file no longer exists when apply runs.
A workaround is to create a dummy trigger with a null_resource and force the archive_file to depend on it, so that the archive is built during the apply stage.
resource "null_resource" "dummy_trigger" {
  triggers = {
    timestamp = timestamp()
  }
}

resource "google_storage_bucket" "cloud-functions" {
  project  = var.project-1-id
  name     = "${var.project-1-id}-cloud-functions"
  location = var.project-1-region
}

resource "google_storage_bucket_object" "start_instance" {
  name   = "start_instance.zip"
  bucket = google_storage_bucket.cloud-functions.name
  source = "${path.module}/start_instance.zip"
  depends_on = [
    data.archive_file.start_instance,
  ]
}

data "archive_file" "start_instance" {
  type        = "zip"
  output_path = "${path.module}/start_instance.zip"

  source {
    content  = file("${path.module}/scripts/start_instance/index.js")
    filename = "index.js"
  }

  source {
    content  = file("${path.module}/scripts/start_instance/package.json")
    filename = "package.json"
  }

  depends_on = [
    null_resource.dummy_trigger,
  ]
}

Related

How to get a relative path of terragrunt.hcl in an AWS resource's tag?

How can I tag each AWS resource with the relative path of its source/terragrunt.hcl file? Ideally, the solution, if it exists, would also work with locally/relatively referenced modules (rather than only modules from a git repo).
# In root terragrunt.hcl
locals {
  # ...
  aws_default_tags = jsonencode({ # This is line 48 in the error below.
    ManagedBy         = "Terraform"
    TerraformBasePath = path.cwd # What is a (working) equivalent of this?
  })
}

generate "provider" {
  # ...
  contents = <<EOF
provider "aws" {
  # ...
  default_tags {
    tags = jsondecode(<<INNEREOF
${local.aws_default_tags}
INNEREOF
    )
  }
}
EOF
}
The error on terragrunt apply, with the root terragrunt.hcl as above:
> terragrunt apply
ERRO[0000] Not all locals could be evaluated:
ERRO[0000] - aws_default_tags [REASON: Can't evaluate expression at
/project/terragrunt.hcl:48,22-60,5:
you can only reference other local variables here,
but it looks like you're referencing something else (path is not defined)]
ERRO[0000] Could not evaluate all locals in block.
ERRO[0000] Unable to determine underlying exit code, so Terragrunt
will exit with error code 1
I got the relative path added as a tag by simplifying the first snippet in the question:
# In root terragrunt.hcl
locals {
  # ...
  # terraform-git-repo = "infrastructure" # For future use in CD pipeline.
  terraform-git-repo = "/local/path/infra"
}

generate "provider" {
  # ...
  contents = <<EOF
provider "aws" {
  # ...
  default_tags {
    tags = {
      Terraform-base-path = replace(replace(path.cwd, "${local.terraform-git-repo}", ""), "/.terragrunt-cache/.*/", "")
    }
  }
}
EOF
}
The nested replace functions could use some simplification.
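One possible simplification (an untested sketch, not from the original answer) would be to trim the known repo prefix with trimprefix() and keep a single regex-style replace for the .terragrunt-cache part, inside the same generated provider block:
generate "provider" {
  # ...
  contents = <<EOF
provider "aws" {
  # ...
  default_tags {
    tags = {
      # Sketch only: trimprefix() drops the leading repo path, and the
      # slash-delimited pattern makes replace() treat it as a regular expression.
      Terraform-base-path = replace(trimprefix(path.cwd, "${local.terraform-git-repo}"), "/.terragrunt-cache/.*/", "")
    }
  }
}
EOF
}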

How to upload files to an S3 bucket based on MIME type

I am trying to figure out, using the AWS CLI, how to upload only certain files to an S3 bucket based on their MIME type.
Currently I am using a Terraform script to do that:
locals {
  mime_types = jsondecode(file("${path.module}/mime.json"))
}

resource "aws_s3_object" "frontend_bucket_objects" {
  for_each     = fileset("${var.build_folder}", "**")
  bucket       = "frontend-bucket"
  key          = each.value
  source       = "${var.build_folder}\\${each.value}"
  content_type = lookup(local.mime_types, regex("\\.[^.]+$", "${each.value}"), null)
  etag         = filemd5("${var.build_folder}\\${each.value}")
}
mime.json:
{
  ".aac": "audio/aac",
  ".abw": "application/x-abiword",
  ".arc": "application/x-freearc",
  ".avif": "image/avif",
  ".avi": "video/x-msvideo",
  ".azw": "application/vnd.amazon.ebook",
  ".bin": "application/octet-stream",
  ".bmp": "image/bmp",
  ".css": "text/css",
  ".csv": "text/csv",
  ".doc": "application/msword",
  ..
  .. etc etc
}
I want to upload using the AWS CLI but am not able to figure out how to include files based on MIME type.
This is what I have right now, which uploads the entire source folder:
`aws s3 cp build/ s3://frontend-bucket --recursive`

My Terraform state remains empty for dns_config.cluster_dns_scope using google_container_cluster

Terraform: v1.1.7
Provider: hashicorp/google v4.12.0
I am trying to spawn a GKE cluster with a specific DNS configuration:
resource "google_container_cluster" "primary" {
name = local.cluster_name
location = local.region
remove_default_node_pool = true
initial_node_count = 1
network = module.gke_vpc.network_name
subnetwork = module.gke_vpc.subnetwork_name
project = local.project
dns_config {
cluster_dns = "CLOUD_DNS"
cluster_dns_scope = "VPC_SCOPE"
}
}
Terraform is able to spawn it, but in its state dns_config.cluster_dns_scope remains empty, meaning that every terraform plan shows a pending change:
cluster_dns_scope : "" -> "VPC_SCOPE"
I tried the different values for cluster_dns_scope:
DNS_SCOPE_UNSPECIFIED
CLUSTER_SCOPE
VPC_SCOPE
But I always get the same result.
I could modify my state as a workaround, but the idea is to re-use the same Terraform module for several projects, so that's not what I want to do.
Any idea?
Not sure if this is the correct way to go about it, but I saw mentioned in some docs that dns_config was GA; in a Terraform issue someone thought it was being promoted from beta. So I added the google-beta provider to see if it was:
% cat provider.tf
# Check Releases here: version numbers tend to match, and should
# https://github.com/hashicorp/terraform-provider-google/releases
# https://github.com/hashicorp/terraform-provider-google-beta/releases
terraform {
  required_version = "~> 1.2.9"
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 4.3.0"
    }
    google-beta = {
      source  = "hashicorp/google-beta"
      version = "~> 4.3.0"
    }
  }
}

provider "google" {
  region  = var.region
  project = var.project_id
}

provider "google-beta" {
  region  = var.region
  project = var.project_id
}
Then specified the provider in the GKE block:
% cat gke.tf
resource "google_container_cluster" "primary" {
  provider         = google-beta
  name             = var.cluster_apps
  location         = var.region
  enable_autopilot = true
  network          = google_compute_network.vpc.name
  subnetwork       = google_compute_subnetwork.subnet.name

  dns_config {
    cluster_dns        = "CLOUD_DNS"
    cluster_dns_scope  = "VPC_SCOPE"
    cluster_dns_domain = var.dns_name
  }
}
After that, it passed:
% tf validate
Success! The configuration is valid.
This is pretty late-breaking stuff, so it may depend on being on a recent Terraform and provider version:
% tf version
Terraform v1.2.9
on darwin_arm64
+ provider registry.terraform.io/hashicorp/google v4.3.0
+ provider registry.terraform.io/hashicorp/google-beta v4.3.0
Your version of Terraform is out of date! The latest version
is 1.3.0. You can update by downloading from <- could upgrade
At least Terraform is happy for now; let's see if GCP accepts it :-P
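Separately from the answer above, if bumping the provider is not an option, a commonly used stopgap (not a fix, and not from the original answer; it only hides the diff rather than correcting the state) is a lifecycle ignore_changes block, sketched here against the question's resource:
resource "google_container_cluster" "primary" {
  name                     = local.cluster_name
  location                 = local.region
  remove_default_node_pool = true
  initial_node_count       = 1
  network                  = module.gke_vpc.network_name
  subnetwork               = module.gke_vpc.subnetwork_name
  project                  = local.project

  dns_config {
    cluster_dns       = "CLOUD_DNS"
    cluster_dns_scope = "VPC_SCOPE"
  }

  lifecycle {
    # Stopgap only: suppresses the "" -> "VPC_SCOPE" diff, so genuine
    # changes to dns_config would also be ignored until this is removed.
    ignore_changes = [dns_config]
  }
}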

Terraform 0.14 template_file and null_resource issue

I'm trying to use a null_resource with a local-exec provisioner to enable S3 bucket logging on a load balancer, using a template file. Both the Terraform file and the template file (lb-to-s3-log.tpl) are in the same directory, /modules/lb-to-s3-log, but I am getting an error. The Terraform file looks like this:
data "template_file" "lb-to-s3-log" {
template = file(".//modules/lb-to-s3-log/lb-to-s3-log.tpl")
vars = {
X_INFO1 = var.INFO1
X_INFO2 = var.INFO2
X_INFO3 = var.INFO3
}
}
resource "null_resource" "lb-to-s3-log" {
provisioner "local-exec" {
command = "aws elb modify-load-balancer-attributes --load-balancer-name ${var.LOAD_BALANCER_NAME[0]} --load-balancer-attributes ${data.template_file.lb-to-s3-log.rendered}"
}
}
WHERE:
var.INFO1 = test1
var.INFO2 = test2
var.INFO3 = test3
AND TEMPLATE (TPL) FILE CONTAINS:
{
  "AccessLog": {
    "Enabled": true,
    "S3BucketName": "${X_INFO1}-${X_INFO2}-${X_INFO3}-logs",
    "EmitInterval": 5,
    "S3BucketPrefix": "${X_INFO1}-${X_INFO2}-${X_INFO3}-logs"
  }
}
ERROR IM GETTING:
Error: Error running command 'aws elb modify-load-balancer-attributes --load-balancer-name awseb-e-5-AWSEBLoa-ABCDE0FGHI0V --load-balancer-attributes {
"AccessLog": {
"Enabled": true,
"S3BucketName": "test1-test2-test3-logs",
"EmitInterval": 5,
"S3BucketPrefix": "test1-test2-test3-logs"
}
}
': exit status 2. Output:
Error parsing parameter '--load-balancer-attributes': Invalid JSON: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
JSON received: {
/bin/sh: line 1: AccessLog:: command not found
/bin/sh: line 2: Enabled:: command not found
/bin/sh: line 3: S3BucketName:: command not found
/bin/sh: line 4: EmitInterval:: command not found
/bin/sh: line 5: S3BucketPrefix:: command not found
/bin/sh: -c: line 6: syntax error near unexpected token `}'
/bin/sh: -c: line 6: ` }'
ISSUE / PROBLEM:
The template file successfully substitutes the variables (X_INFO1, X_INFO2, X_INFO3). The issue seems to be with the ${data.template_file.lb-to-s3-log.rendered} part of the AWS CLI command.
I get the same error when I substitute the lb-s3log.tpl file with lb-s3log.json.
I'm using Terraform v0.14 and followed the process of enabling an S3 bucket for log storage of an Amazon Classic Load Balancer from this documentation.
The error is happening because the JSON needs to be escaped on the command line, or written to a file that is then referenced with file://.
Wrapping your JSON in single quotes should be enough to avoid the shell issues:
data "template_file" "lb-to-s3-log" {
template = file(".//modules/lb-to-s3-log/lb-to-s3-log.tpl")
vars = {
X_INFO1 = var.INFO1
X_INFO2 = var.INFO2
X_INFO3 = var.INFO3
}
}
resource "null_resource" "lb-to-s3-log" {
provisioner "local-exec" {
command = "aws elb modify-load-balancer-attributes --load-balancer-name ${var.LOAD_BALANCER_NAME[0]} --load-balancer-attributes '${data.template_file.lb-to-s3-log.rendered}'"
}
}
You can use the local_file resource to render a file if you'd prefer that option:
data "template_file" "lb-to-s3-log" {
template = file(".//modules/lb-to-s3-log/lb-to-s3-log.tpl")
vars = {
X_INFO1 = var.INFO1
X_INFO2 = var.INFO2
X_INFO3 = var.INFO3
}
}
resource "local_file" "elb_attributes" {
content = data.template_file.lb-to-s3-log.rendered
filename = "${path.module}/elb-attributes.json"
}
resource "null_resource" "lb-to-s3-log" {
provisioner "local-exec" {
command = "aws elb modify-load-balancer-attributes --load-balancer-name ${var.LOAD_BALANCER_NAME[0]} --load-balancer-attributes file://${local_file.elb_attributes.filename}"
}
}
A better alternative here though, unless there's something fundamental preventing it, would be to have Terraform manage the ELB access logs by using the access_logs argument on the resource:
resource "aws_elb" "bar" {
name = "foobar-terraform-elb"
availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
access_logs {
bucket = "foo"
bucket_prefix = "bar"
interval = 60
}
}
You might also want to consider moving to Application Load Balancers or possibly Network Load Balancers, depending on your usage, as Classic ELBs are a deprecated service.
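If you do move to an Application or Network Load Balancer, the aws_lb resource exposes a similar access_logs block; here is a minimal sketch with placeholder names (the bucket, prefix, and subnet IDs are assumptions):
resource "aws_lb" "bar" {
  name               = "foobar-terraform-alb"
  load_balancer_type = "application"
  subnets            = ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"] # placeholders

  access_logs {
    bucket  = "foo"
    prefix  = "bar"
    enabled = true # unlike aws_elb, logging must be enabled explicitly here
  }
}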
Finally, it's also worth noting that the template_file data source has been deprecated since Terraform 0.12, and the templatefile function is preferred instead.
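For example, the rendering above could be done with templatefile() and no data source at all; a minimal sketch, assuming the same template path and variables as the original config:
locals {
  lb_to_s3_log = templatefile(".//modules/lb-to-s3-log/lb-to-s3-log.tpl", {
    X_INFO1 = var.INFO1
    X_INFO2 = var.INFO2
    X_INFO3 = var.INFO3
  })
}

resource "null_resource" "lb-to-s3-log" {
  provisioner "local-exec" {
    # Same single-quote wrapping as above to keep the shell happy.
    command = "aws elb modify-load-balancer-attributes --load-balancer-name ${var.LOAD_BALANCER_NAME[0]} --load-balancer-attributes '${local.lb_to_s3_log}'"
  }
}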

Terraform GCP Dataflow job is giving me an error about name?

This is the Terraform I am using:
provider "google" {
credentials = "${file("${var.credentials}")}"
project = "${var.gcp_project}"
region = "${var.region}"
}
resource "google_dataflow_job" "big_data_job" {
#name = "${var.job_name}"
template_gcs_path = "gs://dataflow-templates/wordcount/template_file"
#template_gcs_path = "gs://dataflow-samples/shakespeare/kinglear.txt"
temp_gcs_location = "gs://bucket-60/counts"
max_workers = "${var.max-workers}"
project = "${var.gcp_project}"
zone = "${var.zone}"
parameters {
name = "cloud_dataflow"
}
}
But I am getting this error, so how can I solve this problem?
Error: Error applying plan:
1 error(s) occurred:
* google_dataflow_job.big_data_job: 1 error(s) occurred:
* google_dataflow_job.big_data_job: googleapi: Error 400: (4ea5c17a2a9d21ab): The workflow could not be created. Causes: (4ea5c17a2a9d2052): Found unexpected parameters: ['name' (perhaps you meant 'appName')], badRequest
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
In your code you've commented out the name argument, but name is required for this resource type.
Remove the leading # from this line
#name = "${var.job_name}"
You've also included name as a parameter to the Dataflow template, but that example wordcount template does not have a name parameter; it only has inputFile and output:
inputFile: The Cloud Storage input file path.
output: The Cloud Storage output file path and prefix.
Remove this part:
parameters {
  name = "cloud_dataflow"
}
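Putting both fixes together, the resource might look roughly like this (a sketch only: the inputFile value reuses the sample kinglear.txt path from the commented-out line in the question, the output path is a placeholder, and parameters is shown in the current map syntax rather than the older block form):
resource "google_dataflow_job" "big_data_job" {
  name              = "${var.job_name}"
  template_gcs_path = "gs://dataflow-templates/wordcount/template_file"
  temp_gcs_location = "gs://bucket-60/counts"
  max_workers       = "${var.max-workers}"
  project           = "${var.gcp_project}"
  zone              = "${var.zone}"

  # The wordcount template only accepts inputFile and output.
  parameters = {
    inputFile = "gs://dataflow-samples/shakespeare/kinglear.txt"
    output    = "gs://bucket-60/counts/output" # placeholder output prefix
  }
}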