My problem is that I can't dynamically attach the created disks to the VPS. The google_compute_attached_disk resource cannot be used here.
What is the correct way to handle this situation? Here is my code:
resource "google_compute_instance" "vps" {
name = var.server_name
description = var.server_description
machine_type = var.server_type
zone = var.server_datacenter
deletion_protection = var.server_delete_protection
labels = var.server_labels
metadata = var.server_metadata
tags = var.server_tags
boot_disk {
auto_delete = false
initialize_params {
size = var.boot_volume_size
type = var.boot_volume_type
image = var.boot_volume_image
labels = var.boot_volume_labels
}
}
dynamic "attached_disk" {
for_each = { for vol in var.volumes : vol.volume_name => vol }
content {
source = element(var.volumes[*].volume_name, 0)
}
}
network_interface {
subnetwork = var.server_network
access_config {
nat_ip = google_compute_address.static_ip.address
}
}
}
resource "google_compute_disk" "volume" {
for_each = { for vol in var.volumes : vol.volume_name => vol }
name = each.value.volume_name
type = each.value.volume_type
size = each.value.volume_size
zone = var.server_datacenter
labels = each.value.volume_labels
}
The volumes variable:
volumes = [{
volume_name = "v3-postgres-saga-import-test-storage"
volume_size = "40"
volume_type = "pd-ssd"
volume_labels = {
environment = "production"
project = "v3"
type = "storage"
}
}, {
volume_name = "volume-vpstest2"
volume_size = "20"
volume_type = "pd-ssd"
volume_labels = {
environment = "production"
project = "v2"
type = "storage"
}
}]
If I do something like this, I get an error:
source = google_compute_disk.volume[*].self_link
This object does not have an attribute named "self_link".
Since you've used for_each in google_compute_disk.volume, it will be a map, not a list. You can therefore collect all the self_link values as follows:
source = values(google_compute_disk.volume)[*].self_link
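Applied inside the dynamic block, you can instead iterate over the created disk resources themselves, so each attached_disk gets its own disk and Terraform tracks the dependency — a sketch along those lines:

```hcl
dynamic "attached_disk" {
  # iterate the map of disks produced by for_each
  for_each = google_compute_disk.volume
  content {
    # attached_disk.value is one google_compute_disk object per iteration
    source = attached_disk.value.self_link
  }
}
```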
You can also use the volumes variable directly as a map instead of a list:
variables.tf file :
variable "volumes" {
default = {
postgres_saga = {
volume_name = "v3-postgres-saga-import-test-storage"
volume_size = "40"
volume_type = "pd-ssd"
volume_labels = {
environment = "production"
project = "v3"
type = "storage"
}
},
volume_vpstest2 = {
volume_name = "volume-vpstest2"
volume_size = "20"
volume_type = "pd-ssd"
volume_labels = {
environment = "production"
project = "v2"
type = "storage"
}
}
}
}
Instead of a variable, you can also use a local value loaded from a JSON configuration file. Example structure of a Terraform module:
project
  module
    main.tf
    locals.tf
    resource
      volumes.json
volumes.json file
{
"volumes": {
"postgres_saga": {
"volume_name": "v3-postgres-saga-import-test-storage",
"volume_size": "40",
"volume_type": "pd-ssd",
"volume_labels": {
"environment": "production",
"project": "v3",
"type": "storage"
}
},
"volume_vpstest2": {
"volume_name": "volume-vpstest2",
"volume_size": "20",
"volume_type": "pd-ssd",
"volume_labels": {
"environment": "production",
"project": "v2",
"type": "storage"
}
}
}
}
locals.tf file
locals {
volumes = jsondecode(file("${path.module}/resource/volumes.json"))["volumes"]
}
main.tf file:
resource "google_compute_instance" "vps" {
name = var.server_name
description = var.server_description
machine_type = var.server_type
zone = var.server_datacenter
deletion_protection = var.server_delete_protection
labels = var.server_labels
metadata = var.server_metadata
tags = var.server_tags
boot_disk {
auto_delete = false
initialize_params {
size = var.boot_volume_size
type = var.boot_volume_type
image = var.boot_volume_image
labels = var.boot_volume_labels
}
}
dynamic "attached_disk" {
for_each = var.volumes
# for_each = local.volumes
content {
source = google_compute_disk.volume[attached_disk.key].self_link
}
}
network_interface {
subnetwork = var.server_network
access_config {
nat_ip = google_compute_address.static_ip.address
}
}
}
resource "google_compute_disk" "volume" {
for_each = var.volumes
# for_each = local.volumes
name = each.value["volume_name"]
type = each.value["volume_type"]
size = each.value["volume_size"]
zone = var.server_datacenter
labels = each.value["volume_labels"]
}
With a map, you can use for_each directly on the google_compute_disk.volume resource without any transformation. You can also use the same map in a dynamic block.
My error:
Invalid value for "path" parameter: no file exists at "cis-userdata.sh"; this function works only with files that are distributed as part of the configuration source code, so if this file will be created by a resource in this configuration you must instead obtain this result from an attribute of that resource.
My code:
EC2.tf
# ------------------------------------------------------------------------------------------------------------
# ------------------------------- EC2 Module with Latest Ubuntu AMI ------------------------------------------
# ------------------------------ No Network Interfaces. Imports Only -----------------------------------------
# ------------------------------------------------------------------------------------------------------------
resource "aws_instance" "ec2" {
ami = data.aws_ami.ubuntu.id
instance_type = var.instance_type
iam_instance_profile = var.iam_instance_profile
monitoring = var.monitoring
disable_api_termination = var.disable_api_termination
ebs_optimized = true
key_name = var.key_name
vpc_security_group_ids = var.security_groups
subnet_id = var.subnet_id
user_data = templatefile(var.template, {
HOSTNAME = var.name,
linuxPlatform = "",
isRPM = "",
})
metadata_options {
http_endpoint = "enabled"
http_tokens = "required"
http_put_response_hop_limit = 1
}
tags = {
Creator = var.creator
"Cost Center" = var.cost_center
Stack = var.stack
Name = var.name
ControlledByAnsible = var.controlled_by_ansible
ConfigAnsible = var.configansible
}
root_block_device {
delete_on_termination = true
encrypted = true
kms_key_id = var.kms_key_arn # Arn instead of id to avoid forced replacement.
volume_size = 16
tags = {
Creator = var.creator
"Cost Center" = var.cost_center
Stack = var.stack
Name = var.name
}
}
lifecycle {
ignore_changes = [
ami,
user_data,
root_block_device,
]
}
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["xxxx"] # Canonical
}
variables.tf
variable "name" {
default = "xxx-prod"
}
variable "instance_type" {
default = "m5.large"
}
variable "public_ip" {
default = false
}
variable "instance_id" {
default = ""
}
variable "stateManager" {
default = ""
}
variable "iam_instance_profile" {
default = "infra-" # Required for systems manager
}
variable "security_groups" {
default = ["sg-xxxx"] #
}
variable "subnet_id" {
default = "subnet-xxxx"
}
variable "availability_zone" {
default = "us-east-1a"
}
variable "disable_api_termination" {
default = "true"
}
variable "kms_key_arn" {
default = "arn:aws:kms:us-east-1:xxxxx:key/xxxx"
}
variable "creator" {
default = "xxx#xxx.com"
}
variable "cost_center" {
default = "xxx"
}
variable "stack" {
default = "Production"
}
variable "controlled_by_ansible" {
default = "False"
}
variable "country" {
default = ""
}
variable "ec2_number" {
default = "01"
}
variable "monitoring" {
default = true
}
variable "device" {
default = "/dev/xvda"
}
variable "template" {
default = "cis-userdata.sh"
}
variable "key_name" {
default = "xxx"
}
variable "image_id" {
default = "ami-xxx"
}
variable "volume_size" {
default = 16
}
The templatefile function resolves a relative path against the current working directory; prefix it with path.module so the file is found inside the module:
resource "aws_instance" "ec2" {
ami = data.aws_ami.ubuntu.id
instance_type = var.instance_type
iam_instance_profile = var.iam_instance_profile
monitoring = var.monitoring
disable_api_termination = var.disable_api_termination
ebs_optimized = true
key_name = var.key_name
vpc_security_group_ids = var.security_groups
subnet_id = var.subnet_id
user_data = templatefile("${path.module}/${var.template}", {
HOSTNAME = var.name,
linuxPlatform = "",
isRPM = "",
})
metadata_options {
http_endpoint = "enabled"
http_tokens = "required"
http_put_response_hop_limit = 1
}
tags = {
Creator = var.creator
"Cost Center" = var.cost_center
Stack = var.stack
Name = var.name
ControlledByAnsible = var.controlled_by_ansible
ConfigAnsible = var.configansible
}
root_block_device {
delete_on_termination = true
encrypted = true
kms_key_id = var.kms_key_arn # Arn instead of id to avoid forced replacement.
volume_size = 16
tags = {
Creator = var.creator
"Cost Center" = var.cost_center
Stack = var.stack
Name = var.name
}
}
lifecycle {
ignore_changes = [
ami,
user_data,
root_block_device,
]
}
}
cis-userdata.sh contains the user_data for setting up the instance, for example:
#!/bin/bash
sudo apt update
cd /home/ubuntu
sudo apt-get purge 'apache2*'
sudo apt install -y apache2
sudo ufw allow 'Apache Full'
sudo systemctl enable apache2
sudo systemctl start apache2
echo "Hello World from $(hostname -f)" | sudo tee /var/www/html/index.html
For more, visit this answer https://stackoverflow.com/a/62599263/11145307
I am writing a Terraform script to automate provisioning ACM certificates for domains. The issue I am facing is how to merge domain_name and subject_alternative_names: it should pick the first domain from domain_name, merge it with the first block in subject_alternative_names, and so on.
Variable.tf
variable "domain_name" {
description = "Configuration for alb settings"
default = [
"domain.com",
"helloworld.com",
"helloworld2.com",
]
}
variable "subject_alternative_names" {
description = "subject_alternative_names"
default = [ {
domain.com = {
"domain.com",
"domain2.com",
"domain3.com",
},
helloworld.com = {
"helloworld1.com",
"helloworld2.com"
},
hiworld.com = {
"hiworld1.com",
"hiworld2.com"
}
}]
}
variable "region" {
description = "name of the region"
default = "us-east-1"
}
variable "validation_method" {
description = "name of the region"
default = "DNS"
}
variable "tags" {
description = "name of the region"
default = "Test"
}
working variable.tf
variable "domain_name" {
description = "Configuration for alb settings"
default = [
"domain.com",
"helloworld.com",
"helloworld2.com",
"helloworld1.com",
"helloworld3.com",
]
}
variable "subject_alternative_names"{
description = "subject_alternative_names"
default = [
"domain.com",
"helloworld.com",
"helloworld2.com",
"helloworld1.com",
"helloworld3.com",
]
}
variable "region" {
description = "name of the region"
default = "us-east-1"
}
variable "validation_method" {
description = "name of the region"
default = "DNS"
}
variable "tags" {
description = "name of the region"
default = "Test"
}
main.tf
module "acm" {
count = length(var.domain_name)
source = "./modules/acm"
domain_name = var.domain_name[count.index]
validation_method = var.validation_method
tags = var.tags
subject_alternative_names = var.subject_alternative_names
}
resource.tf
variable "domain_name" {
default = ""
description = "Nmae of the domain"
}
variable "validation_method" {
default = ""
description = "Validation method DNS or EMAIL"
}
variable "tags" {
default = ""
description = "tags for the ACM certificate"
}
variable "subject_alternative_names" {
default = ""
description = "subject_alternative_names"
}
resource "aws_acm_certificate" "acm_cert" {
domain_name = var.domain_name
validation_method = var.validation_method
subject_alternative_names = var.subject_alternative_names
lifecycle {
create_before_destroy = true
}
tags = {
Name = var.tags
}
}
The easiest way would be to use a single map:
variable "domain_name_with_alternate_names" {
default = {
"domain.com" = [
"domain.com",
"domain2.com",
"domain3.com",
],
"helloworld.com" = [
"helloworld1.com",
"helloworld2.com"
],
"hiworld.com" = [
"hiworld1.com",
"hiworld2.com"
],
"hiwodd4.com" = []
}
}
module "acm" {
for_each = var.domain_name_with_alternate_names
source = "./modules/acm"
domain_name = each.key
validation_method = var.validation_method
tags = var.tags
subject_alternative_names = each.value
}
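Because the module uses for_each, module.acm is a map of module instances keyed by domain. Assuming the module exposes an output for the certificate ARN (the output name certificate_arn here is hypothetical), you could collect all of them like this:

```hcl
# Map every domain key to its certificate ARN
# (assumes the ./modules/acm module declares an output named "certificate_arn")
output "certificate_arns" {
  value = { for domain, m in module.acm : domain => m.certificate_arn }
}
```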
I am working on creating an Elasticsearch cluster using Terraform. I am not able to get the subnet IDs for a given VPC with aws_subnet_ids; the result is always null. What am I doing wrong?
Code :
provider "aws" {
region = "eu-central-1"
shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
}
variable "domain" {
default = "tf-test"
}
data "aws_vpc" "selected" {
tags = {
Name = var.vpc
}
}
data "aws_subnet_ids" "selected" {
vpc_id = "${data.aws_vpc.selected.id}"
}
resource "aws_elasticsearch_domain" "es" {
domain_name = "${var.domain}"
elasticsearch_version = "6.3"
cluster_config {
instance_type = "m4.large.elasticsearch"
}
vpc_options {
subnet_ids = [
"${data.aws_subnet_ids.selected.ids[0]}",
"${data.aws_subnet_ids.selected.ids[1]}",
]
}
}
Output of terraform plan:
on main.tf line 55, in resource "aws_elasticsearch_domain" "es":
55: "${data.aws_subnet_ids.selected.ids[0]}",
This value does not have any indices.
Error: Invalid index
on main.tf line 56, in resource "aws_elasticsearch_domain" "es":
56: "${data.aws_subnet_ids.selected.ids[1]}",
This value does not have any indices.
Update
If I print the subnet IDs without an index, I get them:
Solved :
subnet_id = "${element(module.vpc.public_subnets, 0)}"
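The root cause of the "This value does not have any indices" error is that data.aws_subnet_ids returns ids as a set of strings, which cannot be indexed directly. Converting the set to a list first also works — a sketch:

```hcl
vpc_options {
  # tolist() turns the set of subnet IDs into an indexable list;
  # slice() then takes the first two entries
  subnet_ids = slice(tolist(data.aws_subnet_ids.selected.ids), 0, 2)
}
```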
The VPC was created in this manner:
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "2.6.0"
name = var.vpc_name
cidr = var.vpc_cidr
azs = data.aws_availability_zones.available.names
private_subnets = [var.private_subnet_1, var.private_subnet_2, var.private_subnet_3]
public_subnets = [var.public_subnet_1, var.public_subnet_2, var.public_subnet_3]
enable_nat_gateway = var.enable_nat_gateway
single_nat_gateway = var.single_nat_gateway
enable_dns_hostnames = var.enable_dns_hostname
public_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/elb" = "1"
"name" = var.public_subnet_name
}
private_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = "1"
"name" = var.private_subnet_name
}
}
AWS EC2 instance creation fails while creating a network interface in the aws_instance section. The configuration follows the Terraform network interfaces configuration documentation.
On removing the network block, the configuration works seamlessly. With the network block, the following error is logged:
"Error: Error launching source instance: Unsupported: The requested configuration is currently not supported. Please check the documentation for supported configurations."
variable "aws_region" {}
variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "vpc_cidr_block" {}
variable "environment" {}
variable "applicationtype" {}
variable "subnet_cidr_block" {}
variable "amiid" {}
variable "instancetype" {}
variable "bucketname" {}
variable "publickey-fe" {}
variable "publickey-be" {}
provider "aws" {
profile = "default"
region = "${var.aws_region}"
access_key = "${var.aws_access_key}"
secret_key = "${var.aws_secret_key}"
}
data "aws_availability_zones" "availability" {
state = "available"
}
resource "aws_vpc" "sitespeed_vpc" {
cidr_block = "${var.vpc_cidr_block}"
instance_tenancy = "dedicated"
tags = {
env = "${var.environment}"
application = "${var.applicationtype}"
Name = "site-speed-VPC"
}
}
resource "aws_subnet" "sitespeed_subnet" {
vpc_id = "${aws_vpc.sitespeed_vpc.id}"
cidr_block = "${var.subnet_cidr_block}"
availability_zone = "${data.aws_availability_zones.availability.names[0]}"
tags = {
env = "${var.environment}"
application = "${var.applicationtype}"
Name = "site-speed-Subnet"
}
}
resource "aws_network_interface" "sitespeed_frontend_NIC" {
subnet_id = "${aws_subnet.sitespeed_subnet.id}"
private_ips = ["192.168.10.100"]
tags = {
env = "${var.environment}"
application = "${var.applicationtype}"
Name = "site-speed-frontend-nic"
}
}
resource "aws_network_interface" "sitespeed_backend_NIC" {
subnet_id = "${aws_subnet.sitespeed_subnet.id}"
private_ips = ["192.168.10.110"]
tags = {
env = "${var.environment}"
application = "${var.applicationtype}"
Name = "site-speed-backend-nic"
}
}
resource "aws_key_pair" "sitespeed_front_key" {
key_name = "site_speed_front_key"
public_key = "${var.publickey-fe}"
}
resource "aws_key_pair" "sitespeed_back_key" {
key_name = "site_speed_back_key"
public_key = "${var.publickey-be}"
}
resource "aws_instance" "sitespeed_front" {
ami = "ami-00942d7cd4f3ca5c0"
instance_type = "t2.micro"
key_name = "site_speed_front_key"
availability_zone = "${data.aws_availability_zones.availability.names[0]}"
network_interface {
network_interface_id = "${aws_network_interface.sitespeed_frontend_NIC.id}"
device_index = 0
}
tags = {
env = "${var.environment}"
application = "${var.applicationtype}"
Name = "site-speed-frontend-server"
public = "yes"
}
}
resource "aws_instance" "sitespeed_backend" {
ami = "ami-00942d7cd4f3ca5c0"
instance_type = "t2.micro"
key_name = "site_speed_back_key"
network_interface {
network_interface_id = "${aws_network_interface.sitespeed_backend_NIC.id}"
device_index = 0
}
tags = {
env = "${var.environment}"
application = "${var.applicationtype}"
Name = "site-speed-backend-server"
public = "No"
}
}
resource "aws_s3_bucket" "b" {
bucket = "${var.bucketname}"
acl = "private"
tags = {
env = "${var.environment}"
application = "${var.applicationtype}"
}
}
The issue was due to the Terraform version. The following updated script supports Terraform v0.12.16 and creates an EC2 instance on AWS.
// Variable Definition
variable "aws_region" {}
variable "aws_vpc_cidr_block" {}
variable "aws_subnet_cidr_block" {}
variable "aws_private_ip_fe" {}
variable "aws_Name" {}
variable "aws_Application" {}
variable "aws_ami" {}
variable "aws_instance_type" {}
// Provider Definition
provider "aws" {
version = "~> 2.40"
region = var.aws_region
}
// Adds a VPC
resource "aws_vpc" "aws_ec2_deployment_test-vpc" {
cidr_block = var.aws_vpc_cidr_block
tags = {
Name = join("-", [var.aws_Name, "vpc"])
Application = var.aws_Application
}
}
//Adds a subnet
resource "aws_subnet" "aws_ec2_deployment_test-subnet" {
vpc_id = aws_vpc.aws_ec2_deployment_test-vpc.id
cidr_block = var.aws_subnet_cidr_block
availability_zone = join("", [var.aws_region, "a"])
tags = {
Name = join("-", [var.aws_Name, "subnet"])
Application = var.aws_Application
}
}
//Adds a Network Interface
resource "aws_network_interface" "aws_ec2_deployment_test-fe" {
subnet_id = aws_subnet.aws_ec2_deployment_test-subnet.id
private_ips = [ var.aws_private_ip_fe ]
tags = {
Name = join("-", [var.aws_Name, "network-interface-fe"])
Application = var.aws_Application
}
}
//Adds an EC2 Instance
resource "aws_instance" "aws_ec2_deployment_test-fe"{
ami = var.aws_ami
instance_type = var.aws_instance_type
network_interface {
network_interface_id = aws_network_interface.aws_ec2_deployment_test-fe.id
device_index = 0
}
tags = {
Name = join("-", [var.aws_Name, "fe-ec2"])
Application = var.aws_Application
}
}
// Print Output Values
output "aws_ec2_deployment_test-vpc" {
description = "CIDR Block for the VPC: "
value = aws_vpc.aws_ec2_deployment_test-vpc.cidr_block
}
output "aws_ec2_deployment_test-subnet" {
description = "Subnet Block: "
value = aws_subnet.aws_ec2_deployment_test-subnet.cidr_block
}
output "aws_ec2_deployment_test-private-ip" {
description = "System Private IP: "
value = aws_network_interface.aws_ec2_deployment_test-fe.private_ip
}
output "aws_ec2_deployment_test-EC2-Details" {
description = "EC2 Details: "
value = aws_instance.aws_ec2_deployment_test-fe.public_ip
}
Gist link to the solution
I'm trying to create a Terraform script which takes user input and executes accordingly. I basically want to ask whether the user wants a static IP in Google Cloud Platform; if yes, stitch the resource "google_compute_instance" accordingly, otherwise let it go.
Sharing the code I have written:
variable "create_eip" {
description = "Enter 1 for true, 0 for false"
}
resource "google_compute_address" "external" {
count = "${var.create_eip}"
name = "external-ip",
address_type = "EXTERNAL",
}
resource "google_compute_instance" "compute-engine" {
name = "random",
machine_type = "f1-micro",
boot_disk {
initialize_params {
size = "10",
type = "pd-ssd",
image = "${data.google_compute_image.image.self_link}"
}
}
network_interface {
subnetwork = "default",
access_config {
nat_ip = "${google_compute_address.external.address}"
}
}
}
The error I'm getting is that when the user puts 0 as input, control still reaches "nat_ip = "${google_compute_address.external.address}"", which produces this error:
google_compute_instance.compute-engine: Resource 'google_compute_address.external' not found for variable
'google_compute_address.external.address'.
I also tried replacing it with
nat_ip = "${var.create_eip == "1" ? "${google_compute_address.external.address}" : ""}"
(if create_eip = 1, use "google_compute_address.external.address", else do nothing), but it does not work as expected.
That's a limitation of Terraform: you can't put a conditional inside a resource, so an "if" only really works on count. You could try something like this:
variable "create_eip" {
description = "Enter 1 for true, 0 for false"
}
resource "google_compute_address" "external" {
count = "${var.create_eip}"
name = "external-ip",
address_type = "EXTERNAL",
}
resource "google_compute_instance" "compute-engine-ip" {
count = "${var.create_eip == 1 ? 1 : 0}"
name = "random",
machine_type = "f1-micro",
boot_disk {
initialize_params {
size = "10",
type = "pd-ssd",
image = "${data.google_compute_image.image.self_link}"
}
}
network_interface {
subnetwork = "default",
access_config {
nat_ip = "${google_compute_address.external.0.address}"
}
}
}
resource "google_compute_instance" "compute-engine" {
count = "${var.create_eip == 1 ? 0 : 1}"
name = "random",
machine_type = "f1-micro",
boot_disk {
initialize_params {
size = "10",
type = "pd-ssd",
image = "${data.google_compute_image.image.self_link}"
}
}
network_interface {
subnetwork = "default",
access_config {
}
}
}
This code creates a compute instance that uses the created static IP when the variable is 1; otherwise it creates an instance with an ephemeral IP. You could also add a lifecycle block if you want to keep the same IP on the compute_address resource:
lifecycle {
ignore_changes = ["node_pool"]
}
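On Terraform 0.12 and later, a dynamic "access_config" block avoids duplicating the whole instance resource; only the access_config itself is made conditional — a sketch using the same create_eip variable (boot_disk omitted for brevity):

```hcl
resource "google_compute_instance" "compute-engine" {
  name         = "random"
  machine_type = "f1-micro"

  network_interface {
    subnetwork = "default"

    dynamic "access_config" {
      # one access_config with the static IP, or none at all
      for_each = var.create_eip == 1 ? [1] : []
      content {
        nat_ip = google_compute_address.external[0].address
      }
    }
  }
}
```

With an empty for_each, the content block (and thus the reference to the counted address resource) is never evaluated, so no error occurs when create_eip is 0.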