I am trying to run a test server on AWS using Terraform. When I run terraform apply it throws an error saying "Reference to undeclared resource". Below is my test server file inside Terraform.
test-server.tf
module "test-server" {
source = "./node-server"
ami-id = "Here ive given my ami_id"
key-pair = aws_key_pair.microservices-demo-key.key_name
name = "Test Server"
}
Below is my key pair file code.
key-pairs
resource "aws_key_pair" "microservcies-demo-key" {
key_name = "microservices-demo-key"
public_key = file("./microservices_demo.pem")
}
Error detail thrown by terraform:
Error: Reference to undeclared resource
on test-server.tf line 4, in module "test-server":
4: key-pair = aws_key_pair.microservices-demo-key.key_name
A managed resource "aws_key_pair" "microservices-demo-key" has not been declared in the root module.
Although I've declared the variables, it's still throwing the error.
This is the image of the directory.
You have a typo here:
resource "aws_key_pair" "microservcies-demo-key" {
Fix this name to be microservices-demo-key so that it matches the name you reference in test-server.tf.
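With that fix, the resource block matches the reference in your module:

resource "aws_key_pair" "microservices-demo-key" {
  key_name   = "microservices-demo-key"
  public_key = file("./microservices_demo.pem")
}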
I am trying to update a test AWS Transfer Server because I was unable to connect to it via SFTP. Now, trying to use the FTP / FTPS protocols, I have used the same layout as the example here.
This is the example in the docs
resource "aws_transfer_server" "example" {
endpoint_type = "VPC"
endpoint_details {
subnet_ids = [aws_subnet.example.id]
vpc_id = aws_vpc.example.id
}
protocols = ["FTP", "FTPS"]
certificate = aws_acm_certificate.example.arn
identity_provider_type = "API_GATEWAY"
url = "${aws_api_gateway_deployment.example.invoke_url}${aws_api_gateway_resource.example.path}"
}
And here is my code
resource "aws_transfer_server" "transfer_x3" {
tags = {
Name = "${var.app}-${var.env}-transfer-x3-server"
}
endpoint_type = "VPC"
endpoint_details {
vpc_id = data.aws_vpc.vpc_global.id
subnet_ids = [data.aws_subnet.vpc_subnet_pri_commande_a.id, data.aws_subnet.vpc_subnet_pri_commande_b.id]
}
protocols = ["FTP", "FTPS"]
certificate = var.certificate_arn
identity_provider_type = "API_GATEWAY"
url = "https://${aws_api_gateway_rest_api.Api.id}.execute-api.${var.region}.amazonaws.com/latest/servers/{serverId}/users/{username}/config"
invocation_role = data.aws_iam_role.terraform-commande.arn
}
And here is the error message
╷
│ Error: error creating Transfer Server: InvalidRequestException: Bad value in IdentityProviderDetails
│
│ with aws_transfer_server.transfer_x3,
│ on transfer-x3.tf line 1, in resource "aws_transfer_server" "transfer_x3":
│ 1: resource "aws_transfer_server" "transfer_x3" {
│
╵
My guess is that it doesn't like the value of the url parameter.
I have tried using the same form as the one provided in the example: url = "${aws_api_gateway_deployment.ApiDeployment.invoke_url}${aws_api_gateway_resource.ApiResourceServerIdUserUsernameConfig.path}", but encountered the same error message.
I have also tried reordering the parameters in case that was the issue, but I got the same error every time I ran terraform apply.
The commands terraform validate and terraform plan didn't show the error message at all.
What value could the url parameter need? Or is there a parameter missing in my resource declaration?
As per the documentation (CloudFormation in this case) [1], the examples say the only thing needed is the invoke URL of the API Gateway:
...
"IdentityProviderDetails": {
  "InvocationRole": "Invocation-Role-ARN",
  "Url": "API_GATEWAY-Invocation-URL"
},
"IdentityProviderType": "API_GATEWAY",
...
Comparing that to the attributes provided by the API Gateway stage resource in Terraform, the only thing that is needed is the invoke_url attribute [2], as in the sketch below.
[1] https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-transfer-server.html#aws-resource-transfer-server--examples
[2] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/api_gateway_stage#invoke_url
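Applied to the resource from the question, a minimal sketch might look like this (the aws_api_gateway_stage name "example" is a placeholder for the stage you actually deploy):

resource "aws_transfer_server" "transfer_x3" {
  tags = {
    Name = "${var.app}-${var.env}-transfer-x3-server"
  }

  endpoint_type = "VPC"

  endpoint_details {
    vpc_id     = data.aws_vpc.vpc_global.id
    subnet_ids = [data.aws_subnet.vpc_subnet_pri_commande_a.id, data.aws_subnet.vpc_subnet_pri_commande_b.id]
  }

  protocols   = ["FTP", "FTPS"]
  certificate = var.certificate_arn

  identity_provider_type = "API_GATEWAY"
  # The stage's invoke_url attribute already contains the full invocation base URL
  url             = aws_api_gateway_stage.example.invoke_url
  invocation_role = data.aws_iam_role.terraform-commande.arn
}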
I have a default tags block and would like to add new tags showing the TG and TF versions used in deployment.
I assumed this would work, but I was wrong...
locals {
  terraform_version  = "${run_cmd("terraform --version")}"
  terragrunt_version = "${run_cmd("terragrunt --version")}"
}
provider "aws" {
default_tags {
tags = {
terraform_version = local.terraform_version
terragrunt_version = local.terragrunt_version
}
}
}
I'm sure there's a simple way to do this, but it eludes me.
Here's the error message:
my-mac$ terragrunt apply
ERRO[0000] Error: Error in function call
ERRO[0000] on /Users/me/git/terraform/environments/terragrunt.hcl line 8, in locals:
ERRO[0000] 8: terraform_version = "${run_cmd("terraform --version")}"
ERRO[0000]
ERRO[0000] Call to function "run_cmd" failed: exec: "terraform --version": executable file not found in $PATH.
ERRO[0000] Encountered error while evaluating locals in file /Users/me/git/terraform/environments/terragrunt.hcl
ERRO[0000] /Users/me/git/terraform/environments/terragrunt.hcl:8,31-39: Error in function call; Call to function "run_cmd" failed: exec: "terraform --version": executable file not found in $PATH.
ERRO[0000] Unable to determine underlying exit code, so Terragrunt will exit with error code 1
The run_cmd function takes the command to run and its arguments as separate parameters. Your example tries to run a command literally named "terraform --version" rather than terraform with the --version argument. You should update your code like the following:
locals {
  terraform_version  = "${run_cmd("terraform", "--version")}"
  terragrunt_version = "${run_cmd("terragrunt", "--version")}"
}
Building on jordanm's good work, I found the TG version was fine, but I needed to strip some verbosity from the TF output for it to be usable as an AWS tag.
locals {
  terraform_version  = "${run_cmd("/bin/bash", "-c", "terraform --version | sed 1q")}"
  terragrunt_version = "${run_cmd("terragrunt", "--version")}"
}
Good work everybody!
I'm trying to use a null resource with a local-exec provisioner to enable S3 bucket logging on a load balancer, using a template file. Both the Terraform file and the template file (lb-to-s3-log.tpl) are in the same directory "/modules/lb-to-s3-log", however I'm getting an error. The Terraform file looks this way:
data "template_file" "lb-to-s3-log" {
template = file(".//modules/lb-to-s3-log/lb-to-s3-log.tpl")
vars = {
X_INFO1 = var.INFO1
X_INFO2 = var.INFO2
X_INFO3 = var.INFO3
}
}
resource "null_resource" "lb-to-s3-log" {
provisioner "local-exec" {
command = "aws elb modify-load-balancer-attributes --load-balancer-name ${var.LOAD_BALANCER_NAME[0]} --load-balancer-attributes ${data.template_file.lb-to-s3-log.rendered}"
}
}
WHERE:
var.INFO1 = test1
var.INFO2 = test2
var.INFO3 = test3
AND TEMPLATE (TPL) FILE CONTAINS:
{
  "AccessLog": {
    "Enabled": true,
    "S3BucketName": "${X_INFO1}-${X_INFO2}-${X_INFO3}-logs",
    "EmitInterval": 5,
    "S3BucketPrefix": "${X_INFO1}-${X_INFO2}-${X_INFO3}-logs"
  }
}
ERROR I'M GETTING:
Error: Error running command 'aws elb modify-load-balancer-attributes --load-balancer-name awseb-e-5-AWSEBLoa-ABCDE0FGHI0V --load-balancer-attributes {
"AccessLog": {
"Enabled": true,
"S3BucketName": "test1-test2-test3-logs",
"EmitInterval": 5,
"S3BucketPrefix": "test1-test2-test3-logs"
}
}
': exit status 2. Output:
Error parsing parameter '--load-balancer-attributes': Invalid JSON: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
JSON received: {
/bin/sh: line 1: AccessLog:: command not found
/bin/sh: line 2: Enabled:: command not found
/bin/sh: line 3: S3BucketName:: command not found
/bin/sh: line 4: EmitInterval:: command not found
/bin/sh: line 5: S3BucketPrefix:: command not found
/bin/sh: -c: line 6: syntax error near unexpected token `}'
/bin/sh: -c: line 6: ` }'
ISSUE / PROBLEM:
The template file successfully substitutes the variable assignments (X_INFO1, X_INFO2, X_INFO3). The issue seems to be with the ${data.template_file.lb-to-s3-log.rendered} part of the AWS CLI command.
I get the same error when I substitute the file lb-s3log.tpl with lb-s3log.json.
I'm using Terraform v0.14. I followed the process for enabling an S3 bucket for log storage of an Amazon Classic Load Balancer from this documentation.
The error is happening because the JSON needs to be escaped for the command line, or written to a file that the CLI can then read via file://.
Wrapping your JSON in single quotes should be enough to escape the shell issues:
data "template_file" "lb-to-s3-log" {
template = file(".//modules/lb-to-s3-log/lb-to-s3-log.tpl")
vars = {
X_INFO1 = var.INFO1
X_INFO2 = var.INFO2
X_INFO3 = var.INFO3
}
}
resource "null_resource" "lb-to-s3-log" {
provisioner "local-exec" {
command = "aws elb modify-load-balancer-attributes --load-balancer-name ${var.LOAD_BALANCER_NAME[0]} --load-balancer-attributes '${data.template_file.lb-to-s3-log.rendered}'"
}
}
You can use the local_file resource to render a file if you'd prefer that option:
data "template_file" "lb-to-s3-log" {
template = file(".//modules/lb-to-s3-log/lb-to-s3-log.tpl")
vars = {
X_INFO1 = var.INFO1
X_INFO2 = var.INFO2
X_INFO3 = var.INFO3
}
}
resource "local_file" "elb_attributes" {
content = data.template_file.lb-to-s3-log.rendered
filename = "${path.module}/elb-attributes.json"
}
resource "null_resource" "lb-to-s3-log" {
provisioner "local-exec" {
command = "aws elb modify-load-balancer-attributes --load-balancer-name ${var.LOAD_BALANCER_NAME[0]} --load-balancer-attributes file://${local_file.elb_attributes.filename}"
}
}
A better alternative here, though, unless there's something fundamental preventing it, would be to have Terraform manage the ELB access logs by using the access_logs parameter of the resource:
resource "aws_elb" "bar" {
name = "foobar-terraform-elb"
availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
access_logs {
bucket = "foo"
bucket_prefix = "bar"
interval = 60
}
}
You might also want to consider moving to Application Load Balancers or Network Load Balancers, depending on your usage, as Classic Load Balancers are a previous-generation service.
Finally, it's also worth noting that the template_file data source has been deprecated since Terraform 0.12 and the templatefile function is preferred instead.
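As a minimal sketch of that approach, assuming the same .tpl file and variables as above, the templatefile function can replace the data source entirely (the local name here is just illustrative):

locals {
  lb_to_s3_log = templatefile("${path.module}/lb-to-s3-log.tpl", {
    X_INFO1 = var.INFO1
    X_INFO2 = var.INFO2
    X_INFO3 = var.INFO3
  })
}

resource "null_resource" "lb-to-s3-log" {
  provisioner "local-exec" {
    # The rendered JSON is wrapped in single quotes so it survives the shell
    command = "aws elb modify-load-balancer-attributes --load-balancer-name ${var.LOAD_BALANCER_NAME[0]} --load-balancer-attributes '${local.lb_to_s3_log}'"
  }
}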
I have 2 RDS instances being created, and when running terraform plan I am getting a Terraform error about an unsupported block type:
Error: Unsupported block type
on rds.tf line 85, in module "rds":
85: resource "random_string" "rds_password_dr" {
Blocks of type "resource" are not expected here.
Error: Unsupported block type
on rds.tf line 95, in module "rds":
95: module "rds_dr" {
Blocks of type "module" are not expected here.
This is my code in my rds.tf file:
# PostgreSQL RDS App Instance
module "rds" {
  source         = "git@github.com:************"
  name           = var.rds_name_app
  engine         = var.rds_engine_app
  engine_version = var.rds_engine_version_app
  family         = var.rds_family_app
  instance_class = var.rds_instance_class_app

  # WARNING: 'terraform taint random_string.rds_password' must be run prior to recreating the DB if it is destroyed
  password = random_string.rds_password.result
  port     = var.rds_port_app
"
"

# PostgreSQL RDS DR Password
resource "random_string" "rds_password_dr" {
  length           = 16
  override_special = "!&*-_=+[]{}<>:?"

  keepers = {
    rds_id = "${var.rds_name_dr}-${var.environment}-${var.rds_engine_dr}"
  }
}

# PostgreSQL RDS DR Instance
module "rds_dr" {
  source         = "git@github.com:notarize/terraform-aws-rds.git?ref=v0.0.1"
  name           = var.rds_name_dr
  engine         = var.rds_engine_dr
  engine_version = var.rds_engine_version_dr
  family         = var.rds_family_dr
  instance_class = var.rds_instance_class_dr

  # WARNING: 'terraform taint random_string.rds_password' must be run prior to recreating the DB if it is destroyed
  password = random_string.rds_password.result
  port     = var.rds_port_dr
"
"
I don't know why I am getting this. Can someone please help me?
You haven't closed the module blocks (module "rds" and module "rds_dr"). You also have a couple of strange double-quotes at the end of both module blocks.
Remove the double-quotes and close the blocks (with }).
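A sketch of the corrected layout (most arguments elided) shows each module block closed with } before the next top-level block begins:

# PostgreSQL RDS App Instance
module "rds" {
  source = "git@github.com:************"
  name   = var.rds_name_app
  # ... remaining arguments ...
}

# PostgreSQL RDS DR Password
resource "random_string" "rds_password_dr" {
  length           = 16
  override_special = "!&*-_=+[]{}<>:?"

  keepers = {
    rds_id = "${var.rds_name_dr}-${var.environment}-${var.rds_engine_dr}"
  }
}

# PostgreSQL RDS DR Instance
module "rds_dr" {
  source = "git@github.com:notarize/terraform-aws-rds.git?ref=v0.0.1"
  name   = var.rds_name_dr
  # ... remaining arguments ...
}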
This is the Terraform I am using.
provider "google" {
credentials = "${file("${var.credentials}")}"
project = "${var.gcp_project}"
region = "${var.region}"
}
resource "google_dataflow_job" "big_data_job" {
#name = "${var.job_name}"
template_gcs_path = "gs://dataflow-templates/wordcount/template_file"
#template_gcs_path = "gs://dataflow-samples/shakespeare/kinglear.txt"
temp_gcs_location = "gs://bucket-60/counts"
max_workers = "${var.max-workers}"
project = "${var.gcp_project}"
zone = "${var.zone}"
parameters {
name = "cloud_dataflow"
}
}
But I am getting this error, so how can I solve this problem:
Error: Error applying plan:
1 error(s) occurred:
* google_dataflow_job.big_data_job: 1 error(s) occurred:
* google_dataflow_job.big_data_job: googleapi: Error 400: (4ea5c17a2a9d21ab): The workflow could not be created. Causes: (4ea5c17a2a9d2052): Found unexpected parameters: ['name' (perhaps you meant 'appName')], badRequest
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
In your code you've commented out the name argument, but name is required for this resource type.
Remove the leading # from this line
#name = "${var.job_name}"
You've also included name as a parameter to the dataflow template, but that example wordcount template does not have a name parameter; it only has inputFile and output:
inputFile - The Cloud Storage input file path.
output - The Cloud Storage output file path and prefix.
Remove this part:
parameters {
  name = "cloud_dataflow"
}
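Putting both fixes together, a sketch of the corrected resource might look like this (the inputFile and output paths are placeholders you would replace with your own buckets):

resource "google_dataflow_job" "big_data_job" {
  name              = "${var.job_name}"
  template_gcs_path = "gs://dataflow-templates/wordcount/template_file"
  temp_gcs_location = "gs://bucket-60/counts"
  max_workers       = "${var.max-workers}"
  project           = "${var.gcp_project}"
  zone              = "${var.zone}"

  # The wordcount template only accepts inputFile and output as parameters
  parameters = {
    inputFile = "gs://dataflow-samples/shakespeare/kinglear.txt"
    output    = "gs://bucket-60/counts/output"
  }
}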