Terraform GCP Dataflow job is giving me an error about name? - google-cloud-platform

This is the Terraform configuration I am using:
provider "google" {
credentials = "${file("${var.credentials}")}"
project = "${var.gcp_project}"
region = "${var.region}"
}
resource "google_dataflow_job" "big_data_job" {
#name = "${var.job_name}"
template_gcs_path = "gs://dataflow-templates/wordcount/template_file"
#template_gcs_path = "gs://dataflow-samples/shakespeare/kinglear.txt"
temp_gcs_location = "gs://bucket-60/counts"
max_workers = "${var.max-workers}"
project = "${var.gcp_project}"
zone = "${var.zone}"
parameters {
name = "cloud_dataflow"
}
}
But I am getting this error. How can I solve this problem?
Error: Error applying plan:
1 error(s) occurred:
* google_dataflow_job.big_data_job: 1 error(s) occurred:
* google_dataflow_job.big_data_job: googleapi: Error 400: (4ea5c17a2a9d21ab): The workflow could not be created. Causes: (4ea5c17a2a9d2052): Found unexpected parameters: ['name' (perhaps you meant 'appName')], badRequest
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

In your code you've commented out the name argument, but name is required for this resource type.
Remove the leading # from this line:

  #name = "${var.job_name}"
You've also included name as a parameter to the dataflow template, but that example wordcount template does not have a name parameter; it only has inputFile and output:

inputFile - The Cloud Storage input file path.
output - The Cloud Storage output file path and prefix.
Remove this part:
  parameters {
    name = "cloud_dataflow"
  }
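With both changes applied, the resource might look like the following sketch. The inputFile value reuses the sample King Lear text from your commented-out line; the output path is a placeholder to replace with your own prefix:

resource "google_dataflow_job" "big_data_job" {
  name              = "${var.job_name}"
  template_gcs_path = "gs://dataflow-templates/wordcount/template_file"
  temp_gcs_location = "gs://bucket-60/counts"
  max_workers       = "${var.max-workers}"
  project           = "${var.gcp_project}"
  zone              = "${var.zone}"

  # Parameters the wordcount template actually accepts.
  parameters {
    inputFile = "gs://dataflow-samples/shakespeare/kinglear.txt" # sample input
    output    = "gs://bucket-60/counts/output"                   # placeholder output prefix
  }
}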

Related

Terragrunt - dynamically add TF and TG versions to AWS default tags

I have a default_tags block and would like to add new tags showing the Terragrunt (TG) and Terraform (TF) versions used in the deployment.
I assumed this would work, but I was wrong:
locals {
  terraform_version  = "${run_cmd("terraform --version")}"
  terragrunt_version = "${run_cmd("terragrunt --version")}"
}

provider "aws" {
  default_tags {
    tags = {
      terraform_version  = local.terraform_version
      terragrunt_version = local.terragrunt_version
    }
  }
}
I'm sure there's a simple way to do this, but it eludes me.
Here's the error message:
my-mac$ terragrunt apply
ERRO[0000] Error: Error in function call
ERRO[0000] on /Users/me/git/terraform/environments/terragrunt.hcl line 8, in locals:
ERRO[0000] 8: terraform_version = "${run_cmd("terraform --version")}"
ERRO[0000]
ERRO[0000] Call to function "run_cmd" failed: exec: "terraform --version": executable file not found in $PATH.
ERRO[0000] Encountered error while evaluating locals in file /Users/me/git/terraform/environments/terragrunt.hcl
ERRO[0000] /Users/me/git/terraform/environments/terragrunt.hcl:8,31-39: Error in function call; Call to function "run_cmd" failed: exec: "terraform --version": executable file not found in $PATH.
ERRO[0000] Unable to determine underlying exit code, so Terragrunt will exit with error code 1
The run_cmd function takes the command to run and its arguments as separate parameters. Your example tries to run a single executable literally named "terraform --version" rather than terraform with the argument --version. Update your code like the following:
locals {
  terraform_version  = "${run_cmd("terraform", "--version")}"
  terragrunt_version = "${run_cmd("terragrunt", "--version")}"
}
Building on jordanm's good work, I found the TG version was fine, but I needed to trim some verbosity from the TF output for it to be usable as an AWS tag.
locals {
  terraform_version  = "${run_cmd("/bin/bash", "-c", "terraform --version | sed 1q")}"
  terragrunt_version = "${run_cmd("terragrunt", "--version")}"
}
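For reference, sed 1q keeps only the first line of the command's output (e.g. Terraform v0.12.29) and drops the provider listing that terraform --version prints after it, which is presumably what made the raw value too verbose for a tag.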
Good work everybody!

Reference to undeclared resource in Terraform

I am trying to run a test server on AWS using Terraform. When I run terraform apply, it throws an error saying Reference to undeclared resource. Below is my test server file.
test-server.tf
module "test-server" {
source = "./node-server"
ami-id = "Here ive given my ami_id"
key-pair = aws_key_pair.microservices-demo-key.key_name
name = "Test Server"
}
Below is my key pair file code.
key-pairs

resource "aws_key_pair" "microservcies-demo-key" {
  key_name   = "microservices-demo-key"
  public_key = file("./microservices_demo.pem")
}
Error detail thrown by terraform:
Error: Reference to undeclared resource
on test-server.tf line 4, in module "test-server":
4: key-pair = aws_key_pair.microservices-demo-key.key_name
A managed resource "aws_key_pair" "microservices-demo-key" has not been declared in the root module.
Although I've declared the variables, it's still throwing the error.
You have a typo here:
resource "aws_key_pair" "microservcies-demo-key" {
Fix this name to be microservices-demo-key so that it matches the name you reference in test-server.tf.
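For clarity, the corrected resource block would be:

resource "aws_key_pair" "microservices-demo-key" {
  key_name   = "microservices-demo-key"
  public_key = file("./microservices_demo.pem")
}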

Terraform google_logging_project_sink 'Exclusions' unknown block type

I'm running the latest google provider and am trying to use the example code from the Terraform registry to create a log sink. However, the exclusions block is unrecognized:
I keep getting 'An argument named "exclusions" is not expected here'
Any ideas on where I am going wrong?
resource "google_logging_project_sink" "log-bucket" {
name = "my-logging-sink"
destination = "logging.googleapis.com/projects/my-project/locations/global/buckets/_Default"
exclusions {
name = "nsexcllusion1"
description = "Exclude logs from namespace-1 in k8s"
filter = "resource.type = k8s_container resource.labels.namespace_name=\"namespace-1\" "
}
exclusions {
name = "nsexcllusion2"
description = "Exclude logs from namespace-2 in k8s"
filter = "resource.type = k8s_container resource.labels.namespace_name=\"namespace-2\" "
}
unique_writer_identity = true
Here is the output showing the version of the google provider I am running:
$ terraform version
Terraform v0.12.29
+ provider.datadog v2.21.0
+ provider.google v3.44.0
+ provider.google-beta v3.57.0
Update: I have also tried Terraform 0.14 and it makes no difference.
Error: Unsupported block type
on ..\..\..\..\modules\krtyen\datadog\main.tf line 75, in module "export_logs_to_datadog_log_sink":
75: exclusions {
Blocks of type "exclusions" are not expected here.
Releasing state lock. This may take a few moments...
[terragrunt] 2021/02/22 11:11:20 Hit multiple errors:
exit status 1
You have to upgrade your google provider; the exclusions block was added in version v3.44.0:
logging: Added support for exclusions options for google_logging_project_sink
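A minimal sketch of how you might pin the provider at or above that version (this uses the Terraform 0.13+ required_providers syntax; on 0.12 you can instead set version directly in the provider "google" block):

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">= 3.44.0"
    }
  }
}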

Files are not archived in terraform before uploaded to GCP

Despite using the depends_on directive, it looks like the zip is not created before Terraform tries to put it in the bucket. Judging by the pipeline output, it simply skips archiving the files before firing the upload to the bucket. Both files (index.js and package.json) exist.
resource "google_storage_bucket" "cloud-functions" {
project = var.project-1-id
name = "${var.project-1-id}-cloud-functions"
location = var.project-1-region
}
resource "google_storage_bucket_object" "start_instance" {
name = "start_instance.zip"
bucket = google_storage_bucket.cloud-functions.name
source = "${path.module}/start_instance.zip"
depends_on = [
data.archive_file.start_instance,
]
}
data "archive_file" "start_instance" {
type = "zip"
output_path = "${path.module}/start_instance.zip"
source {
content = file("${path.module}/scripts/start_instance/index.js")
filename = "index.js"
}
source {
content = file("${path.module}/scripts/start_instance/package.json")
filename = "package.json"
}
}
Terraform has been successfully initialized!
$ terraform apply -input=false "planfile"
google_storage_bucket_object.stop_instance: Creating...
google_storage_bucket_object.start_instance: Creating...
Error: open ./start_instance.zip: no such file or directory
on cloud_functions.tf line 41, in resource "google_storage_bucket_object" "start_instance":
41: resource "google_storage_bucket_object" "start_instance" {
LOGS:
2020-11-18T13:02:56.796Z [DEBUG] plugin.terraform-provider-google_v3.40.0_x5: 2020/11/18 13:02:56 [WARN] Failed to read source file "./start_instance.zip". Cannot compute md5 hash for it.
2020/11/18 13:02:56 [WARN] Provider "registry.terraform.io/hashicorp/google" produced an invalid plan for google_storage_bucket_object.stop_instance, but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .detect_md5hash: planned value cty.StringVal("different hash") does not match config value cty.NullVal(cty.String)
2020/11/18 13:02:56 [WARN] Provider "registry.terraform.io/hashicorp/google" produced an invalid plan for google_storage_bucket_object.start_instance, but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .detect_md5hash: planned value cty.StringVal("different hash") does not match config value cty.NullVal(cty.String)
I have exactly the same issue with a GitLab CI/CD pipeline. After some digging through the discussion, I found out that with this setup the plan and apply stages run in separate containers, and the archiving step is executed in the plan stage.
A workaround is to create a dummy trigger with null_resource and force the archive_file to depend on it, and hence to be executed in the apply stage.
# Dummy trigger that changes on every run, forcing the archive
# to be (re)built during the apply stage.
resource "null_resource" "dummy_trigger" {
  triggers = {
    timestamp = timestamp()
  }
}

resource "google_storage_bucket" "cloud-functions" {
  project  = var.project-1-id
  name     = "${var.project-1-id}-cloud-functions"
  location = var.project-1-region
}

resource "google_storage_bucket_object" "start_instance" {
  name   = "start_instance.zip"
  bucket = google_storage_bucket.cloud-functions.name
  source = "${path.module}/start_instance.zip"

  depends_on = [
    data.archive_file.start_instance,
  ]
}

data "archive_file" "start_instance" {
  type        = "zip"
  output_path = "${path.module}/start_instance.zip"

  source {
    content  = file("${path.module}/scripts/start_instance/index.js")
    filename = "index.js"
  }

  source {
    content  = file("${path.module}/scripts/start_instance/package.json")
    filename = "package.json"
  }

  # Reference the managed resource directly; "resource." is not a valid prefix.
  depends_on = [
    null_resource.dummy_trigger,
  ]
}
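This works because the timestamp() trigger is not known until apply time, so Terraform defers both the replacement of the null_resource and the read of the archive_file data source that depends on it until apply, which runs in the same container as the upload.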

Terraform Error: Argument or block definition required when I run TF plan

I have 2 RDS instances being created, and when running terraform plan I am getting an error about an unsupported block type:
Error: Unsupported block type
on rds.tf line 85, in module "rds":
85: resource "random_string" "rds_password_dr" {
Blocks of type "resource" are not expected here.
Error: Unsupported block type
on rds.tf line 95, in module "rds":
95: module "rds_dr" {
Blocks of type "module" are not expected here.
This is my code in my rds.tf file:
# PostgreSQL RDS App Instance
module "rds" {
  source         = "git#github.com:************"
  name           = var.rds_name_app
  engine         = var.rds_engine_app
  engine_version = var.rds_engine_version_app
  family         = var.rds_family_app
  instance_class = var.rds_instance_class_app

  # WARNING: 'terraform taint random_string.rds_password' must be run prior to recreating the DB if it is destroyed
  password = random_string.rds_password.result
  port     = var.rds_port_app
  "
  "

# PostgreSQL RDS DR Password
resource "random_string" "rds_password_dr" {
  length           = 16
  override_special = "!&*-_=+[]{}<>:?"

  keepers = {
    rds_id = "${var.rds_name_dr}-${var.environment}-${var.rds_engine_dr}"
  }
}

# PostgreSQL RDS DR Instance
module "rds_dr" {
  source         = "git#github.com:notarize/terraform-aws-rds.git?ref=v0.0.1"
  name           = var.rds_name_dr
  engine         = var.rds_engine_dr
  engine_version = var.rds_engine_version_dr
  family         = var.rds_family_dr
  instance_class = var.rds_instance_class_dr

  # WARNING: 'terraform taint random_string.rds_password' must be run prior to recreating the DB if it is destroyed
  password = random_string.rds_password.result
  port     = var.rds_port_dr
  "
  "
I don't know why I am getting this. Can someone please help me?
You haven't closed the module blocks (module "rds" and module "rds_dr"). You also have a couple of strange double-quotes at the end of both module blocks.
Remove the double-quotes and close the blocks (with }).
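A sketch of the corrected module blocks (arguments kept as in the question, trailing quotes removed, blocks closed):

# PostgreSQL RDS App Instance
module "rds" {
  source         = "git#github.com:************"
  name           = var.rds_name_app
  engine         = var.rds_engine_app
  engine_version = var.rds_engine_version_app
  family         = var.rds_family_app
  instance_class = var.rds_instance_class_app
  password       = random_string.rds_password.result
  port           = var.rds_port_app
}

# PostgreSQL RDS DR Instance
module "rds_dr" {
  source         = "git#github.com:notarize/terraform-aws-rds.git?ref=v0.0.1"
  name           = var.rds_name_dr
  engine         = var.rds_engine_dr
  engine_version = var.rds_engine_version_dr
  family         = var.rds_family_dr
  instance_class = var.rds_instance_class_dr
  password       = random_string.rds_password.result
  port           = var.rds_port_dr
}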