Terraform Error: Argument or block definition required when I run terraform plan

I have two RDS instances being created, and when running terraform plan I get an error about an unsupported block type:
Error: Unsupported block type
on rds.tf line 85, in module "rds":
85: resource "random_string" "rds_password_dr" {
Blocks of type "resource" are not expected here.
Error: Unsupported block type
on rds.tf line 95, in module "rds":
95: module "rds_dr" {
Blocks of type "module" are not expected here.
This is my code in my rds.tf file:
# PostgreSQL RDS App Instance
module "rds" {
  source         = "git#github.com:************"
  name           = var.rds_name_app
  engine         = var.rds_engine_app
  engine_version = var.rds_engine_version_app
  family         = var.rds_family_app
  instance_class = var.rds_instance_class_app
  # WARNING: 'terraform taint random_string.rds_password' must be run prior to recreating the DB if it is destroyed
  password       = random_string.rds_password.result
  port           = var.rds_port_app
"
"
# PostgreSQL RDS DR Password
resource "random_string" "rds_password_dr" {
  length           = 16
  override_special = "!&*-_=+[]{}<>:?"
  keepers = {
    rds_id = "${var.rds_name_dr}-${var.environment}-${var.rds_engine_dr}"
  }
}
# PostgreSQL RDS DR Instance
module "rds_dr" {
  source         = "git#github.com:notarize/terraform-aws-rds.git?ref=v0.0.1"
  name           = var.rds_name_dr
  engine         = var.rds_engine_dr
  engine_version = var.rds_engine_version_dr
  family         = var.rds_family_dr
  instance_class = var.rds_instance_class_dr
  # WARNING: 'terraform taint random_string.rds_password' must be run prior to recreating the DB if it is destroyed
  password       = random_string.rds_password.result
  port           = var.rds_port_dr
"
"
I don't know why I'm getting this error. Can someone help me?

You haven't closed the module blocks (module "rds" and module "rds_dr"). You also have a couple of stray double-quotes at the end of both module blocks.
Remove the double-quotes and close each block with }.
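For reference, the first block with both problems fixed would look like this (the module source is elided exactly as in the question):

```hcl
# PostgreSQL RDS App Instance
module "rds" {
  source         = "git#github.com:************"
  name           = var.rds_name_app
  engine         = var.rds_engine_app
  engine_version = var.rds_engine_version_app
  family         = var.rds_family_app
  instance_class = var.rds_instance_class_app
  password       = random_string.rds_password.result
  port           = var.rds_port_app
} # closing brace replaces the stray double-quotes
```

The same change applies to the module "rds_dr" block.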

AWS Terraform: Cannot find upgrade path from 5.7.38 to 5.6, GitLab CI runners fail with InvalidParameterCombination

Every time I run the pipeline, the GitLab CI runners fail the job with the following error message:
│ Error: Error modifying DB Instance legacy-dms: InvalidParameterCombination: Cannot find upgrade path from 5.7.38 to 5.6.
│ status code: 400, request id: e7740193-bf98-464c-a1b3-4124d7f5d909
│
│ with module.db.module.db_instance.aws_db_instance.this[0],
│ on .terraform/modules/db/modules/db_instance/main.tf line 45, in resource "aws_db_instance" "this":
│ 45: resource "aws_db_instance" "this" {
│
╵
Terraform HCL file:
inputs = {
  # Identifier is name in AWS and should be unique in the account
  identifier = "test-dms"
  # Name is actual DB name (doesn't need to be unique)
  name = "PNRQ"
  # Set the following carefully on valid RDS values:
  engine         = "mysql"
  engine_version = "5.6"
  # One year old bug, creating option groups when the default should be used,
  # so we have to explicitly set it to the default:
  # https://github.com/terraform-aws-modules/terraform-aws-rds/issues/272
  # https://github.com/terraform-aws-modules/terraform-aws-rds/issues/188
  option_group_name = "default:mysql-5-6"
  port              = 3306
  # Change these depending on size/load requirements of the DB and environment
  instance_class    = "db.t3.micro"
  allocated_storage = 20
  multi_az          = false
  # Boilerplate for VPC
  vpc_id         = dependency.vpc.outputs.vpc_id
  vpc_subnet_ids = dependency.vpc.outputs.private_subnets
  allowed_security_groups = [
    # Allow EKS connection
    dependency.eks.outputs.worker_security_group_id,
    # Allow infra runner connection
    dependency.infra_ci.outputs.runner_sg_id
  ]
}
We have an AWS RDS instance named test-dms running engine version 5.7.38,
and I have already updated the ACM certificate associated with this account.
Can anyone assist me in resolving this problem? I would greatly appreciate any help.
I'm not sure why you mentioned an ACM certificate since it has nothing to do with any of the code, or the error message, in your question.
As for the error you are getting, you have a MySQL 5.7 RDS server running in AWS, and you have Terraform configured to deploy a MySQL 5.6 server. The Terraform error is telling you that you are asking it to downgrade the server to a previous version, and AWS doesn't allow you to do that. You need to update your Terraform code to specify MySQL 5.7.
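Concretely, only the version-related inputs need to change to match the instance already running in AWS. Since the question pins the default option group explicitly, it should be kept consistent with the engine version as well (a sketch of just the changed lines):

```hcl
engine         = "mysql"
# Match the major version already running in AWS (5.7.38); RDS cannot downgrade.
engine_version = "5.7"
# Keep the explicitly pinned default option group in sync with the engine version.
option_group_name = "default:mysql-5-7"
```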

Dynamically run terraform block based on input variable value

The Problem:
AWS doesn't support enhanced monitoring on the t3.small instance class, which is what we use for smaller RDS deployments, but it does support it on larger RDS instance classes. We want to disable enhanced monitoring in Terraform when the instance class is t3.
Looking at the Terraform resource docs (https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/db_instance), it seems you simply don't specify the monitoring interval and role when you don't want enhanced monitoring.
I'm trying to conditionally execute the resource block based on what the monitoring interval is set to: when it's set to 0, run the block without monitoring_role_arn, and when it's set to anything other than 0, run the block where it is set.
However, I'm getting an error:
╷
│ Error: Missing newline after argument
│
│ On main.tf line 68: An argument definition must end with a newline.
╵
Error: Terraform exited with code 1.
Error: Process completed with exit code 1.
I was following this Stack Overflow post: How to conditional create resource in Terraform based on a string variable
but it doesn't work; I still get the above error.
Here is my terraform code:
resource "aws_rds_cluster_instance" "cluster_instances" {
  count                   = var.monitoring_interval != "0" ? var.cluster_instance_count : 0
  identifier              = "${var.service}-${var.environment}-${count.index}"
  cluster_identifier      = aws_rds_cluster.default.id
  instance_class          = var.instance_class
  engine                  = aws_rds_cluster.default.engine
  monitoring_role_arn     = var.monitoring_role
  engine_version          = aws_rds_cluster.default.engine_version
  monitoring_interval     = var.monitoring_interval
  db_parameter_group_name = var.regional_instance_param_group_name
  copy_tags_to_snapshot   = true
  publicly_accessible     = false
  db_subnet_group_name    = var.regional_subnet_group_name
}
resource "aws_rds_cluster_instance" "cluster_instances" {
  count                   = var.monitoring_interval = "0" ? var.cluster_instance_count : 0
  identifier              = "${var.service}-${var.environment}-${count.index}"
  cluster_identifier      = aws_rds_cluster.default.id
  instance_class          = var.instance_class
  engine                  = aws_rds_cluster.default.engine
  engine_version          = aws_rds_cluster.default.engine_version
  db_parameter_group_name = var.regional_instance_param_group_name
  copy_tags_to_snapshot   = true
  publicly_accessible     = false
  db_subnet_group_name    = var.regional_subnet_group_name
}
Thanks for your help. It's probably something small I'm missing or have misunderstood about Terraform conditionals.
You're missing an = in your condition. Change it to this:
var.monitoring_interval == "0" ? var.cluster_instance_count : 0
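Note that even with the == fixed, Terraform will reject two resource blocks with the same type and name (aws_rds_cluster_instance.cluster_instances appears twice). A common alternative is a single resource that sets monitoring_role_arn to null when monitoring is disabled; a minimal sketch using the question's variable names:

```hcl
resource "aws_rds_cluster_instance" "cluster_instances" {
  count              = var.cluster_instance_count
  identifier         = "${var.service}-${var.environment}-${count.index}"
  cluster_identifier = aws_rds_cluster.default.id
  instance_class     = var.instance_class
  engine             = aws_rds_cluster.default.engine
  engine_version     = aws_rds_cluster.default.engine_version

  # When the interval is 0, pass null so the role argument is effectively omitted.
  monitoring_interval = var.monitoring_interval
  monitoring_role_arn = var.monitoring_interval != "0" ? var.monitoring_role : null

  db_parameter_group_name = var.regional_instance_param_group_name
  copy_tags_to_snapshot   = true
  publicly_accessible     = false
  db_subnet_group_name    = var.regional_subnet_group_name
}
```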

Unable to create new EKS with terraform

I'm having problems creating a new EKS cluster, version 1.22, in a dev environment.
I'm using the module from the Terraform registry, trimming some parts since it's only for testing purposes (we just want to test version 1.22).
I'm using a VPC that was created for testing EKS clusters, with 2 public subnets and 2 private subnets.
This is my main.tf:
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "18.21.0"

  cluster_name    = "EKSv2-update-test"
  cluster_version = "1.22"

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = true

  cluster_addons = {
    coredns = {
      resolve_conflicts = "OVERWRITE"
    }
    kube-proxy = {}
    vpc-cni = {
      resolve_conflicts = "OVERWRITE"
    }
  }

  vpc_id     = "vpc-xxx" # eks-vpc
  subnet_ids = ["subnet-priv-1-xxx", "subnet-priv-2-xxx", "subnet-pub-1-xxx", "subnet-pub-2-xxx"]
}
terraform apply times out after 20 minutes (it just hangs on module.eks.aws_eks_addon.this["coredns"]: Still creating... [20m0s elapsed]),
and this is the error:
│ Error: unexpected EKS Add-On (EKSv2-update-test:coredns) state returned during creation: timeout while waiting for state to become 'ACTIVE' (last state: 'DEGRADED', timeout: 20m0s)
│ [WARNING] Running terraform apply again will remove the kubernetes add-on and attempt to create it again effectively purging previous add-on configuration
│
│ with module.eks.aws_eks_addon.this["coredns"],
│ on .terraform/modules/eks/main.tf line 305, in resource "aws_eks_addon" "this":
│ 305: resource "aws_eks_addon" "this" {
The EKS gets created, but this is clearly not the way to go.
Regarding coredns, what am I missing?
Thanks
A minimum of 2 cluster nodes is required for the coredns add-on to satisfy the requirements of its replica set. Your module block defines no node groups, so the cluster comes up without worker nodes and coredns stays DEGRADED until it times out.
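One way to satisfy this with the same module (a sketch, assuming the v18.x input names; instance type and sizes are illustrative) is to add a managed node group with at least 2 nodes:

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "18.21.0"

  cluster_name    = "EKSv2-update-test"
  cluster_version = "1.22"
  # ... same cluster_addons, vpc_id, and subnet_ids as above ...

  # Gives coredns the 2 nodes its replica set needs.
  eks_managed_node_groups = {
    default = {
      min_size       = 2
      desired_size   = 2
      max_size       = 3
      instance_types = ["t3.medium"]
    }
  }
}
```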

Terraform google_logging_project_sink 'Exclusions' unknown block type

I'm running the latest google provider and trying to use the example code from the Terraform registry to create a log sink. However, the exclusions block is unrecognized.
I keep getting: An argument named "exclusions" is not expected here.
Any ideas on where I am going wrong?
resource "google_logging_project_sink" "log-bucket" {
  name        = "my-logging-sink"
  destination = "logging.googleapis.com/projects/my-project/locations/global/buckets/_Default"

  exclusions {
    name        = "nsexcllusion1"
    description = "Exclude logs from namespace-1 in k8s"
    filter      = "resource.type = k8s_container resource.labels.namespace_name=\"namespace-1\" "
  }
  exclusions {
    name        = "nsexcllusion2"
    description = "Exclude logs from namespace-2 in k8s"
    filter      = "resource.type = k8s_container resource.labels.namespace_name=\"namespace-2\" "
  }

  unique_writer_identity = true
}
terraform version output, showing which Google provider versions are in use:
$ terraform version
Terraform v0.12.29
+ provider.datadog v2.21.0
+ provider.google v3.44.0
+ provider.google-beta v3.57.0
Update: Have also tried 0.14 of Terraform and that makes no difference.
Error: Unsupported block type
on ..\..\..\..\modules\krtyen\datadog\main.tf line 75, in module "export_logs_to_datadog_log_sink":
75: exclusions {
Blocks of type "exclusions" are not expected here.
Releasing state lock. This may take a few moments...
[terragrunt] 2021/02/22 11:11:20 Hit multiple errors:
exit status 1
You have to upgrade your google provider. The exclusions block was added in version v3.44.0:
logging: Added support for exclusions options for google_logging_project_sink
Note that the error points at a module (the main.tf under modules\krtyen\datadog), so make sure the provider version constraints in that module also allow v3.44.0 or newer; terraform version only shows what the root configuration loaded.
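A minimal way to pin the provider at or above that version (a sketch in Terraform 0.12 syntax, since the question shows v0.12.29; the 0.13+ required_providers form with a source attribute differs):

```hcl
terraform {
  required_providers {
    # Require a google provider new enough to understand the exclusions block.
    google = ">= 3.44.0"
  }
}
```

After adding this, run terraform init -upgrade so the newer provider is actually downloaded.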

Terraform GCP Dataflow job is giving me an error about name?

This is the Terraform I am using:
provider "google" {
  credentials = "${file("${var.credentials}")}"
  project     = "${var.gcp_project}"
  region      = "${var.region}"
}
resource "google_dataflow_job" "big_data_job" {
  #name = "${var.job_name}"
  template_gcs_path = "gs://dataflow-templates/wordcount/template_file"
  #template_gcs_path = "gs://dataflow-samples/shakespeare/kinglear.txt"
  temp_gcs_location = "gs://bucket-60/counts"
  max_workers       = "${var.max-workers}"
  project           = "${var.gcp_project}"
  zone              = "${var.zone}"
  parameters {
    name = "cloud_dataflow"
  }
}
But I am getting this error, so how can I solve the problem?
Error: Error applying plan:
1 error(s) occurred:
* google_dataflow_job.big_data_job: 1 error(s) occurred:
* google_dataflow_job.big_data_job: googleapi: Error 400: (4ea5c17a2a9d21ab): The workflow could not be created. Causes: (4ea5c17a2a9d2052): Found unexpected parameters: ['name' (perhaps you meant 'appName')], badRequest
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
In your code you've commented out the name argument, but name is required for this resource type.
Remove the leading # from this line:
#name = "${var.job_name}"
You've also included name as a parameter to the dataflow template, but that example wordcount template does not have a name parameter, it only has inputFile and output:
inputFile The Cloud Storage input file path.
output The Cloud Storage output file path and prefix.
Remove this part:
parameters {
name = "cloud_dataflow"
}
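Putting both fixes together, a corrected resource might look like this (a sketch keeping the question's variables; values for the template's own inputFile and output parameters are omitted since the question doesn't define them):

```hcl
resource "google_dataflow_job" "big_data_job" {
  # name is a required argument for this resource type.
  name              = "${var.job_name}"
  template_gcs_path = "gs://dataflow-templates/wordcount/template_file"
  temp_gcs_location = "gs://bucket-60/counts"
  max_workers       = "${var.max-workers}"
  project           = "${var.gcp_project}"
  zone              = "${var.zone}"
  # The parameters block with name was removed: the wordcount template
  # only accepts inputFile and output.
}
```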