aws_elb Terraform error: Failed to load root config module

Here's the block of code for aws_elb from main.tf.
resource "aws_elb" "terraformelb" {
  name            = "terraformelb"
  subnets         = ["${aws_subnet.public_subnet.id}"]
  security_groups = ["${aws_security_group.web_sg.id}"]
  instances       = ["${aws_instance.web_*.id}"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
}
I have followed the Terraform syntax, and I still get this error:
Failed to load root config module: Error loading C:\Users\snadella001\Downloads\Terraform\repo\main.tf: Error reading config for aws_elb[terraform-elb]: parse error at 1:21: expected expression but found "."

The error message refers to the resource terraform-elb (with a hyphen in the name), but your resource is named terraformelb.
You need to make sure the names are the same.

It looks like your instances argument is wrong. It should look something like this, I'm guessing (not being able to see the rest of your code):
instances = ["${aws_instance.web.*.id}"]
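For context, the .* splat syntax only works when the instance resource uses count; a minimal sketch under that assumption (the AMI and instance type are placeholders, not from the question):

```hcl
resource "aws_instance" "web" {
  count         = 2
  ami           = "ami-12345678" # placeholder AMI
  instance_type = "t2.micro"
}

resource "aws_elb" "terraformelb" {
  # ...
  # The splat expands to the IDs of all instances created by count
  instances = ["${aws_instance.web.*.id}"]
}
```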

Related

Terraform AWS - Unable to update Transfer Server with incomplete error message

I am trying to update a test AWS Transfer Server because I was unable to connect to it via SFTP. Now, trying to use the FTP/FTPS protocols, I have used the same layout as the example here.
This is the example in the docs:
resource "aws_transfer_server" "example" {
  endpoint_type = "VPC"

  endpoint_details {
    subnet_ids = [aws_subnet.example.id]
    vpc_id     = aws_vpc.example.id
  }

  protocols              = ["FTP", "FTPS"]
  certificate            = aws_acm_certificate.example.arn
  identity_provider_type = "API_GATEWAY"
  url                    = "${aws_api_gateway_deployment.example.invoke_url}${aws_api_gateway_resource.example.path}"
}
And here is my code:
resource "aws_transfer_server" "transfer_x3" {
  tags = {
    Name = "${var.app}-${var.env}-transfer-x3-server"
  }

  endpoint_type = "VPC"

  endpoint_details {
    vpc_id     = data.aws_vpc.vpc_global.id
    subnet_ids = [data.aws_subnet.vpc_subnet_pri_commande_a.id, data.aws_subnet.vpc_subnet_pri_commande_b.id]
  }

  protocols              = ["FTP", "FTPS"]
  certificate            = var.certificate_arn
  identity_provider_type = "API_GATEWAY"
  url                    = "https://${aws_api_gateway_rest_api.Api.id}.execute-api.${var.region}.amazonaws.com/latest/servers/{serverId}/users/{username}/config"
  invocation_role        = data.aws_iam_role.terraform-commande.arn
}
And here is the error message
╷
│ Error: error creating Transfer Server: InvalidRequestException: Bad value in IdentityProviderDetails
│
│ with aws_transfer_server.transfer_x3,
│ on transfer-x3.tf line 1, in resource "aws_transfer_server" "transfer_x3":
│ 1: resource "aws_transfer_server" "transfer_x3" {
│
╵
My guess is that it doesn't like the value in the url parameter.
I have tried using the same form as the one provided in the example, url = "${aws_api_gateway_deployment.ApiDeployment.invoke_url}${aws_api_gateway_resource.ApiResourceServerIdUserUsernameConfig.path}", but encountered the same error message.
I have also tried reordering the parameters in case that was the issue, but I got the same error every time I ran terraform apply.
The commands terraform validate and terraform plan didn't show the error message at all.
What value does the url parameter need? Or is there a parameter missing in my resource declaration?
As per the documentation (CloudFormation in this case) [1], the examples show that the only thing needed is the invoke URL of the API Gateway:
...
"IdentityProviderDetails": {
  "InvocationRole": "Invocation-Role-ARN",
  "Url": "API_GATEWAY-Invocation-URL"
},
"IdentityProviderType": "API_GATEWAY",
...
Comparing that to the attributes provided by the API Gateway stage resource in terraform, the only thing that is needed is the invoke_url attribute [2].
[1] https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-transfer-server.html#aws-resource-transfer-server--examples
[2] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/api_gateway_stage#invoke_url
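Following that, a minimal sketch of the fix, assuming an aws_api_gateway_stage named Stage exists for the REST API (the stage name is an assumption, not from the question), is to pass the stage's invoke_url attribute instead of hand-building the URL:

```hcl
resource "aws_transfer_server" "transfer_x3" {
  endpoint_type          = "VPC"
  protocols              = ["FTP", "FTPS"]
  certificate            = var.certificate_arn
  identity_provider_type = "API_GATEWAY"

  # The stage's invoke_url already has the correct scheme, region,
  # and stage path; the stage name "Stage" is an assumption.
  url             = aws_api_gateway_stage.Stage.invoke_url
  invocation_role = data.aws_iam_role.terraform-commande.arn

  endpoint_details {
    vpc_id     = data.aws_vpc.vpc_global.id
    subnet_ids = [data.aws_subnet.vpc_subnet_pri_commande_a.id, data.aws_subnet.vpc_subnet_pri_commande_b.id]
  }
}
```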

error modifying Lambda Function configuration: ValidationException with Lambda and VPC

I am building a Lambda in Terraform using its AWS module, and my code is as below:
module "lambda_function" {
  # * Lambda module configs
  source  = "terraform-aws-modules/lambda/aws"
  version = "3.0.0"

  # * Lambda Configs
  function_name                     = "${var.function_name}-${var.env}"
  description                       = "My Project"
  handler                           = local.constants.lambda.HANDLER
  runtime                           = local.constants.lambda.VERSION
  memory_size                       = 128
  cloudwatch_logs_retention_in_days = 14
  source_path                       = "./function/"
  timeout                           = local.constants.lambda.TIMEOUT
  create_async_event_config         = true
  maximum_retry_attempts            = local.constants.lambda.RETRIES_ATTEMPT

  layers = [
    data.aws_lambda_layer_version.layer_requests.arn
  ]

  environment_variables = {
    AWS_ACCOUNT        = var.env
    SLACK_HOOK_CHANNEL = var.SLACK_HOOK_CHANNEL
  }

  tags = {
    Name = "${var.function_name}-${var.env}"
  }

  trusted_entities = local.constants.lambda.TRUSTED_ENTITIES
}
This code works fine and the Lambda gets deployed. Now I need to put the Lambda in the VPC. When I add the code below to the resource block, I get the error:
error modifying Lambda Function (lambda_name) configuration : ValidationException:
│ status code: 400, request id: de2641f6-1125-4c83-87fa-3fe32dee7b06
│
│ with module.lambda_function.aws_lambda_function.this[0],
│ on .terraform/modules/lambda_function/main.tf line 22, in resource "aws_lambda_function" "this":
│ 22: resource "aws_lambda_function" "this" {
The code for the vpc is:
# * VPC configurations
vpc_subnet_ids         = ["10.21.0.0/26", "10.21.0.64/26", "10.21.0.128/26"]
vpc_security_group_ids = ["sg-ffffffffff"] # Using a dummy value here
attach_network_policy  = true
If I use the same values in the AWS console and deploy the Lambda in the VPC, it works fine.
Can someone please help?
You have to provide valid subnet ids, not CIDR ranges. So instead of
vpc_subnet_ids = ["10.21.0.0/26", "10.21.0.64/26", "10.21.0.128/26"]
it should be
vpc_subnet_ids = ["subnet-asfid1", "subnet-asfid2", "subnet-as4id1"]
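If only the CIDR ranges are known ahead of time, the subnet IDs can be looked up with data sources. A sketch, assuming each CIDR matches exactly one subnet in the provider's region (the data source name "lambda" is an assumption):

```hcl
# Hypothetical lookup: resolve each CIDR block to its subnet ID
data "aws_subnet" "lambda" {
  for_each   = toset(["10.21.0.0/26", "10.21.0.64/26", "10.21.0.128/26"])
  cidr_block = each.value
}

# Then pass the resolved IDs to the module:
# vpc_subnet_ids = [for s in data.aws_subnet.lambda : s.id]
```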

terraform data source output to file

I would like to find out whether a Terraform data source can place its output in a text file.
I looked online but wasn't able to find anything. I plan to get the load balancer name, and afterwards our automation script will run an aws-cli command using the load balancer name obtained by the data source.
If your CLB name is autogenerated by Terraform, you can save it in a file using local_file:
resource "aws_elb" "clb" {
  availability_zones = ["ap-southeast-2a"]

  listener {
    instance_port     = 8000
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
}

resource "local_file" "foo" {
  content  = <<-EOL
    ${aws_elb.clb.name}
  EOL
  filename = "${path.module}/clb_name.txt"
}

output "clb_name" {
  value = aws_elb.clb.name
}
But maybe it would be easier to read the output value directly as JSON:
clb_name=$(terraform output -json clb_name | jq -r)
echo ${clb_name}
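If the CLB is managed outside this configuration, a data source can be combined with local_file as well. A sketch, assuming a Classic Load Balancer named my-existing-clb already exists (the name is an assumption):

```hcl
# Look up an existing Classic Load Balancer by name
data "aws_elb" "existing" {
  name = "my-existing-clb" # assumed name
}

# Write its DNS name to a file for the automation script to read
resource "local_file" "clb_dns" {
  content  = data.aws_elb.existing.dns_name
  filename = "${path.module}/clb_dns_name.txt"
}
```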

Terraform Error: Argument or block definition required when I run TF plan

I have two RDS instances being created, and when running terraform plan I get a Terraform error about an unsupported block type:
Error: Unsupported block type

  on rds.tf line 85, in module "rds":
  85: resource "random_string" "rds_password_dr" {

Blocks of type "resource" are not expected here.

Error: Unsupported block type

  on rds.tf line 95, in module "rds":
  95: module "rds_dr" {

Blocks of type "module" are not expected here.
This is my code in my rds.tf file:
# PostgreSQL RDS App Instance
module "rds" {
  source         = "git#github.com:************"
  name           = var.rds_name_app
  engine         = var.rds_engine_app
  engine_version = var.rds_engine_version_app
  family         = var.rds_family_app
  instance_class = var.rds_instance_class_app
  # WARNING: 'terraform taint random_string.rds_password' must be run prior to recreating the DB if it is destroyed
  password = random_string.rds_password.result
  port     = var.rds_port_app
"
"
# PostgreSQL RDS DR Password
resource "random_string" "rds_password_dr" {
  length           = 16
  override_special = "!&*-_=+[]{}<>:?"
  keepers = {
    rds_id = "${var.rds_name_dr}-${var.environment}-${var.rds_engine_dr}"
  }
}
# PostgreSQL RDS DR Instance
module "rds_dr" {
  source         = "git#github.com:notarize/terraform-aws-rds.git?ref=v0.0.1"
  name           = var.rds_name_dr
  engine         = var.rds_engine_dr
  engine_version = var.rds_engine_version_dr
  family         = var.rds_family_dr
  instance_class = var.rds_instance_class_dr
  # WARNING: 'terraform taint random_string.rds_password' must be run prior to recreating the DB if it is destroyed
  password = random_string.rds_password.result
  port     = var.rds_port_dr
"
"
I don't know why I am getting this. Can someone please help me?
You haven't closed the module blocks (module "rds" and module "rds_dr"). You also have a couple of strange double-quotes at the end of both module blocks.
Remove the double-quotes and close the blocks (with }).
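A corrected skeleton (most arguments elided, source values kept as in the question) would look like this:

```hcl
# PostgreSQL RDS App Instance
module "rds" {
  source = "git#github.com:************"
  # ...other arguments...
} # <- this closing brace was missing; the stray `"` lines are removed

# PostgreSQL RDS DR Password
resource "random_string" "rds_password_dr" {
  length           = 16
  override_special = "!&*-_=+[]{}<>:?"
}

# PostgreSQL RDS DR Instance
module "rds_dr" {
  source = "git#github.com:notarize/terraform-aws-rds.git?ref=v0.0.1"
  # ...other arguments...
} # <- this closing brace was missing
```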

Get endpoint for Terraform with aws_elasticache_replication_group

I have what I think is a simple Terraform config for AWS ElastiCache with Redis:
resource "aws_elasticache_replication_group" "my_replication_group" {
  replication_group_id          = "my-rep-group",
  replication_group_description = "eln00b"
  node_type                     = "cache.m4.large"
  port                          = 6379
  parameter_group_name          = "default.redis5.0.cluster.on"
  snapshot_retention_limit      = 1
  snapshot_window               = "00:00-05:00"
  subnet_group_name             = "${aws_elasticache_subnet_group.my_subnet_group.name}"
  automatic_failover_enabled    = true

  cluster_mode {
    num_node_groups         = 1
    replicas_per_node_group = 1
  }
}
I tried to define the endpoint output using:
output "my_cache" {
  value = "${aws_elasticache_replication_group.my_replication_group.primary_endpoint_address}"
}
When I run an apply through terragrunt I get:
Error: Error running plan: 1 error(s) occurred:
module.mod.output.my_cache: Resource 'aws_elasticache_replication_group.my_replication_group' does not have attribute 'primary_endpoint_address' for variable 'aws_elasticache_replication_group.my_replication_group.primary_endpoint_address'
What am I doing wrong here?
The primary_endpoint_address attribute is only available for non-cluster-mode Redis replication groups, as mentioned in the docs:
primary_endpoint_address - (Redis only) The address of the endpoint for the primary node in the replication group, if the cluster mode is disabled.
When using cluster mode, you should use the configuration_endpoint_address attribute instead to connect to the Redis cluster.
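Since the configuration in the question has cluster_mode enabled, the corrected output would be a sketch like:

```hcl
output "my_cache" {
  # cluster_mode is enabled, so expose the configuration endpoint
  value = "${aws_elasticache_replication_group.my_replication_group.configuration_endpoint_address}"
}
```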