I am new to Terraform and learning modules, and I am stuck on a problem.
I have a modules folder as the root, which contains two folders, EC2 and IAM, each with Terraform code in it. In the same modules folder I have a main.tf file that calls the EC2 module.
For reference, the EC2 folder contains two files: one for the instance resource and one for the variable definitions.
My EC2 file looks like this:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  count         = var.ec2_count
  ami           = var.ami
  instance_type = var.instance_type

  tags = {
    Name = "App_Instance"
  }
}
My variables.tf file looks like this:
variable "ec2_count" {
  type    = number
  default = 1
}

variable "ami" {
  type = string
}

variable "instance_type" {
  type = string
}
My main.tf file looks like this:

module "EC2" {
  source = "./EC2"
}
Now, when I run the terraform plan command, I want it to prompt for the variable values at the command line, but instead it shows the error below.
PS: I don't want to pass the variable values in the module block.
C:\Users\PC\Desktop\AWS_Modules\Modules>terraform plan
╷
│ Error: Missing required argument
│
│ on main.tf line 1, in module "EC2":
│ 1: module "EC2" {
│
│ The argument "ami" is required, but no definition was found.
╵
╷
│ Error: Missing required argument
│
│ on main.tf line 1, in module "EC2":
│ 1: module "EC2" {
│
│ The argument "instance_type" is required, but no definition was found.
Since you are new to modules, there are a few things to consider:
Where are you defining the variables?
How are you passing values to the variables required by the module?
Ideally, you would pass values to the variables when calling the module:
module "EC2" {
  source        = "./EC2"
  ami           = "ami-gafadfafa"
  instance_type = "t3.micro"
}
However, since you said you do not want to do that, you need to assign default values to the variables at the module level. This can be achieved with:
variable "ec2_count" {
  type    = number
  default = 1
}

variable "ami" {
  type    = string
  default = "ami-gafadfafa"
}

variable "instance_type" {
  type    = string
  default = "t3.micro"
}
This way, if you do not provide values when calling the module, the instance will still be created, and you still leave the option open to provide values if someone else uses your module.
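For example, a caller could override just one of the module-level defaults while leaving the others untouched (the values here are illustrative, not taken from your setup):

module "EC2" {
  source        = "./EC2"
  instance_type = "t2.micro" # overrides the module-level default of "t3.micro"
}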
The third option, which matches what you asked for, is to have a separate variables.tf file in the root module (i.e., the same directory from which you are calling the module) and not define values for the variables. It would basically be a copy of the file you have at the module level, without any default values. Here is an example:
# variables.tf on the root module level
variable "ec2_count" {
  type = number
}

variable "ami" {
  type = string
}

variable "instance_type" {
  type = string
}
However, in this case it is impossible to avoid referencing the input variable names when calling the module. For this to work with values provided on the CLI, the code has to change to:
module "EC2" {
  source        = "./EC2"
  ec2_count     = var.ec2_count
  ami           = var.ami
  instance_type = var.instance_type
}
There is no other way for the module to know how to map the values you want it to consume without actually setting those variables when calling the module.
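With the root-level variables declared and no defaults set, terraform plan then prompts interactively for each missing value, roughly like this (the values entered below are illustrative):

$ terraform plan
var.ami
  Enter a value: ami-gafadfafa

var.ec2_count
  Enter a value: 1

var.instance_type
  Enter a value: t3.micro

This is exactly the prompt-at-the-command-line behavior the question asks for: Terraform only prompts for variables that are declared in the root module and have no value from defaults, tfvars files, or -var flags.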
Related
I have an API Gateway invoke_url that is unique and created by Terraform; this unique URL usually has the form
https://123abc.execute-api.us-east-1.amazonaws.com
That value has to go inside index.html for an S3 object, so I'm using Terraform's template feature:
<!DOCTYPE html>
<html>
SOME CODE
var backend_url = "${backend_api_gateway}/voting"
some code
});
</html>
Trying with locals in Terraform did not work:
locals {
  backend_api_gateway = aws_apigatewayv2_stage.default.invoke_url
}

resource "aws_s3_object" "index_file_vote" {
  bucket  = aws_s3_bucket.frontend_vote.id
  key     = "index.html"
  content = templatefile("./vote/index.html.tpl", {
    backend_api_url = local.backend_api_gateway
  })

  depends_on = [aws_s3_bucket.frontend_vote, aws_apigatewayv2_api.main_apigateway]
}
It gives error:
Invalid function argument
│
│ on s3_bucket_vote.tf line 93, in resource "aws_s3_object" "index_file_vote":
│ 93: content = templatefile("./vote/index.html.tpl", {
│ 94: backend_api_url = local.backend_api_gateway
│ 95: })
│ ├────────────────
│ │ local.backend_api_gateway will be known only after apply
│
│ Invalid value for "vars" parameter: vars map does not contain
│ key "backend_api_gateway", referenced at
│ ./vote/index.html.tpl:34,28-47.
Trying with vars, declaring the to-be-created API Gateway's invoke URL, did not work either:
variable "backend_api_gateway" {
  type    = string
  default = "${aws_apigatewayv2_stage.default.invoke_url}" // error: 'variables not allowed'
}
Since there are no modules involved, this should be easy to fix. First, the value assigned to the variable has to go away: resource attributes cannot be referenced in a variable default, and you do not actually need the variable at all. Second, the explicit depends_on is unnecessary, because referencing the stage's invoke_url already creates an implicit dependency. Third, although there is nothing wrong with using a local value, it is not needed either. Finally, note what the error is actually telling you: the key in the vars map must match the placeholder name used in the template. Your template references ${backend_api_gateway}, but you were passing backend_api_url. Here is the change required:

resource "aws_s3_object" "index_file_vote" {
  bucket  = aws_s3_bucket.frontend_vote.id
  key     = "index.html"
  content = templatefile("./vote/index.html.tpl", {
    backend_api_gateway = aws_apigatewayv2_stage.default.invoke_url
  })
}
Background
Hi all,
Terraform newbie here.
I'm trying to look up an existing AWS certificate ARN and use that value in an ingress object annotation in my ingress.tf file.
As a first step, I tried to fetch the value using the Terraform code below:
# get-certificate-arn.tf
data "aws_acm_certificate" "test" {
  domain   = "test.example.com"
  statuses = ["ISSUED"]
}

output "test" {
  value       = data.aws_acm_certificate.test.*.arn
  description = "TESTING"
}
When I run this code, it gives me my certificate ARN back (YEY!) like the example below:
Changes to Outputs:
+ debugging = [
+ [
+ "arn:aws:acm:us-east-1:1234567890:certificate/12345abc-123-456-789def-12345etc",
]
Question:
I'd like to take this to the next level and use the output above to feed the ingress annotations, as shown by "????" in the code below:
# ingress.tf
resource "kubernetes_ingress_v1" "test_ingress" {
  metadata {
    name      = "test-ingress"
    namespace = "default"
    annotations = {
      "alb.ingress.kubernetes.io/certificate-arn" = ????
      ...etc...
    }
  }
}
I've tried:
"alb.ingress.kubernetes.io/certificate-arn" = data.aws_acm_certificate.test.*.arn
which doesn't work, but I can't quite figure out how to pass the value from data.aws_acm_certificate.test.arn in get-certificate-arn.tf to the ingress.tf file.
The error I get is:
Error: Incorrect attribute value type
│
│ on ingress.tf line 6, in resource "kubernetes_ingress_v1" "test_ingress":
│ 6: annotations = {
│ 9: "alb.ingress.kubernetes.io/certificate-arn" = data.aws_acm_certificate.test.*.arn
[...truncated...]
│ 16: }
│ ├────────────────
│ │ data.aws_acm_certificate.test is object with 11 attributes
│
│ Inappropriate value for attribute "annotations": element "alb.ingress.kubernetes.io/certificate-arn": string required.
If anyone could advise how (if?!) one can pass a variable to the kubernetes_ingress_v1 annotations, that would be amazing. I'm still learning Terraform and am still reviewing the fundamentals of passing variables around.
Have you tried using:
data.aws_acm_certificate.test.arn
Alternatively, you can build the whole annotations block as a local:

locals {
  ingress_annotations = {
    somekey        = "somevalue"
    some_other_key = data.aws_acm_certificate.test.arn
  }
}

and use it in the resource:
annotations = local.ingress_annotations
I'm not that keen on TF, but if you need to build the annotations dynamically, you might need a more complex setup with a for expression:

locals {
  ingress_annotations = [
    { key = "somekey", value = "somevalue" },
    { key = "some_other_key", value = data.aws_acm_certificate.test.arn },
  ]
}

resource "kubernetes_ingress_v1" "test_ingress" {
  metadata {
    name        = "test-ingress"
    namespace   = "default"
    annotations = { for line in local.ingress_annotations : line.key => line.value }
  }
}
In the end, the solution was a typo in the data reference: removing the "*" resolved the issue. For interest's sake, if you want to attach two certificates to an ingress annotation, you can join them as shown here[1]:
"alb.ingress.kubernetes.io/certificate-arn" = format("%s,%s", data.aws_acm_certificate.test.arn, data.aws_acm_certificate.test2.arn)
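An equivalent way to write this, which scales more naturally to any number of certificates, is the join function (a sketch using the same two data sources as above):

"alb.ingress.kubernetes.io/certificate-arn" = join(",", [
  data.aws_acm_certificate.test.arn,
  data.aws_acm_certificate.test2.arn,
])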
I am trying to get the network interface IDs of a VPC endpoint using the aws_network_interface data source; the code looks like this:
resource "aws_vpc_endpoint" "api-gw" {
  vpc_id              = var.vpc_id
  service_name        = "com.amazonaws.${var.aws_region}.execute-api"
  vpc_endpoint_type   = "Interface"
  security_group_ids  = [aws_security_group.datashop_sg.id]
  private_dns_enabled = true
  subnet_ids          = [data.aws_subnet.private-1.id]
}

data "aws_network_interface" "endpoint-api-gw" {
  count = length(aws_vpc_endpoint.api-gw.network_interface_ids)
  id    = tolist(aws_vpc_endpoint.api-gw.network_interface_ids)[count.index]
}
I get the following error
Error: Invalid count argument
│
│ in data "aws_network_interface" "endpoint-api-gw":
│ count = length(aws_vpc_endpoint.api-gw.network_interface_ids)
│
│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work
│ around this, use the -target argument to first apply only the resources that the count depends on.
I have also tried for_each, and it gives a similar error about depending on resources. I am running out of ideas. It would be great if someone could help.
The error is clear:
count = length(aws_vpc_endpoint.api-gw.network_interface_ids)
is only known after apply. You can't do this: the count value must be known at plan time. You have to run your Terraform in two stages:
Execute Terraform with the -target option to deploy only aws_vpc_endpoint.api-gw.
Execute it again to deploy the rest.
Otherwise, you have to refactor your code and fully eliminate the dependency of count on aws_vpc_endpoint.api-gw.network_interface_ids.
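One common refactor along those lines (a sketch, relying on the fact that an Interface endpoint creates exactly one ENI per subnet): derive count from the subnet list, which is known at plan time, instead of from the endpoint's computed attribute:

data "aws_network_interface" "endpoint-api-gw" {
  # one ENI per subnet, so this count is known at plan time
  count = length([data.aws_subnet.private-1.id])
  id    = tolist(aws_vpc_endpoint.api-gw.network_interface_ids)[count.index]
}

The id argument may still only be known after apply, but that is fine: only count (and for_each keys) must be resolvable at plan time.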
I am working quite a bit with Terraform in hopes of building an in-house solution for standing up infra. So far, I have written most of the Terraform code and am building out the Ansible code for post-processing the instances that are stood up. I am shuttling the dynamic inventory from Terraform to Ansible using this little Go app: https://github.com/adammck/terraform-inventory. All of that works well.
As I get further into the Terraform code, I am trying to use a ternary conditional operator on the SSH key for Linux instances. The goal is to "reuse" this resource for multiple instances.
My resource looks like this:
resource "aws_key_pair" "key" {
  key_name   = var.ssh_key
  count      = var.create_ssh_key ? 1 : 0
  public_key = file("~/.ssh/${var.ssh_key}")
}
I've included [count.index] within the key_name argument here:
resource "aws_instance" "linux" {
  ami           = var.linux_ami
  instance_type = var.linux_instance_type
  count         = var.linux_instance_number
  subnet_id     = data.aws_subnet.itops_subnet.id
  key_name      = aws_key_pair.key[count.index].key_name
  ...
$ terraform validate comes back clean.
$ terraform plan -var-file response-file.tfvars -var "create_ssh_key=false" does not.
The standard error is as follows:
$ terraform plan -var-file response-file.tfvars -var "create_ssh_key=false"
╷
│ Error: Invalid index
│
│ on instances.tf line 16, in resource "aws_instance" "linux":
│ 16: key_name = aws_key_pair.key[count.index].key_name
│ ├────────────────
│ │ aws_key_pair.key is empty tuple
│ │ count.index is 0
│
│ The given key does not identify an element in this collection value.
What am I missing?
Thanks for the feedback!
If count in aws_key_pair is 0, there is no key to reference later on at all.
So you have to check for that and use null to omit key_name in that case:
resource "aws_instance" "linux" {
  ami           = var.linux_ami
  instance_type = var.linux_instance_type
  count         = var.linux_instance_number
  subnet_id     = data.aws_subnet.itops_subnet.id
  key_name      = var.create_ssh_key ? aws_key_pair.key[0].key_name : null
}
I think the root problem in your example here is that your resource "aws_key_pair" "key" block and your resource "aws_instance" "linux" block both have different values for count, and so therefore it isn't valid to use the count.index of the second to access an instance of the first.
In your case, you seem to have zero key pairs (it says aws_key_pair.key is empty tuple) but you have at least one EC2 instance, and so your expression is trying to access the zeroth instance of the key pair resource, which then fails because it doesn't have a zeroth instance.
If you are using Terraform v0.15 or later then you can use the one function to concisely handle both the zero- and one-instance cases of the key pair resource, like this:
resource "aws_instance" "linux" {
# ...
key_name = one(aws_key_pair.key[*].key_name)
# ...
}
Breaking this down into smaller parts:
aws_key_pair.key[*].key_name is a splat expression which projects from the list of objects aws_key_pair.key into a list of just strings containing key names, by accessing .key_name on each of the objects. In your case, because your resource can only have count 0 or 1, this'll be either a zero-or-one-element list of key names.
one then accepts both of those possible results as follows:
If it's a one-element list, it'll return just that single element no longer wrapped in a list.
If it's a zero-element list, it'll return null which, in a resource argument, means the same thing as not specifying that argument at all.
The effect, then, will be that if you have one key pair then it'll associate that key pair, but if you have no key pairs then it'll leave that argument unset and thus create an instance that doesn't have a key pair at all.
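The behavior of one can be checked interactively with terraform console (the values here are illustrative):

> one(["my-key"])
"my-key"
> one([])
null

Passing a list with more than one element to one is an error, which is why it only suits resources whose count is constrained to 0 or 1, as in this question.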
I have this problem: I am trying to create a network and subnetworks in GCP, and I am using modules to do so.
My directory structure looks like this:
modules/
  network/
    main.tf
    variables.tf
  subnetworks/
    main.tf
    variables.tf
main.tf
terraform.tfvars
variables.tf
The folders inside modules contain the module code, as their names suggest.
The main.tf inside network looks like this:
# module to create the network
resource "google_compute_network" "network" {
  name                    = var.network_name
  auto_create_subnetworks = false
}
And the main.tf inside subnetworks looks like this:

resource "google_compute_subnetwork" "public-subnetwork" {
  network = // how to refer to the network name here?
  ...
}
In the normal scenario, when we have one single Terraform configuration for all resources (i.e., when we don't use modules), it would look like this:
# create vpc
resource "google_compute_network" "kubernetes-vpc" {
  name                    = "kubernetes-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "master-sub" {
  network = google_compute_network.kubernetes-vpc.name
  ...
}
We can directly reference google_compute_network.kubernetes-vpc.name for the value of network when creating the google_compute_subnetwork. But now that I am using modules, how can I achieve this?
Thanks.
You can create an outputs.tf file in the network module.
Inside it, you can declare an output like this:

output "google_compute_network_name" {
  description = "The name of the network"
  value       = google_compute_network.network.name
}
Now, inside the subnetworks module, you can use a standard input variable to receive the value of the network name:

resource "google_compute_subnetwork" "public-subnetwork" {
  // receive the network name as a variable
  network = var.network_name
  ...
}
And where you use the network and subnetworks modules in main.tf, in the root folder (I assume), you can pass the output of the network module to the subnetwork module.
Example:
module "root_network" {
  source = "./modules/network"
}

module "subnetwork" {
  source = "./modules/subnetworks"

  // input variable for the subnetwork, from the output of the network module
  network_name = module.root_network.google_compute_network_name
}
If you want to read more about output values, see the Terraform documentation on outputs.