I am using the Terraform aws_rds_cluster and aws_rds_cluster_instance resources to provision an AWS RDS (MySQL) cluster. This creates the cluster with one writer and two read replicas. In output.tf, I need to get the endpoint of the RDS writer instance.
output "rds_writer_instance_endpoint {
value = aws_rds_cluster.instances.*.endpoint
}
This returns the endpoints of all three instances. How do I retrieve only the writer endpoint?
Your instance objects are returned with a writer attribute: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_cluster_instance#attributes-reference
writer – Boolean indicating if this instance is writable. False indicates this instance is a read replica.
You can use this attribute to filter your list and return only the objects that have this value set to true. Assuming that only one instance can be the writer, you can then use Terraform's one function to set the output to the single value of that filtered list.
output "rds_writer_instance_endpoint {
value = one([for instance in aws_rds_cluster.instances]: var.endpoint if var.writer])
}
As an example of this in action, I have created a list variable that stands in for your instances resource and applied the above pattern to it.
variable "instances" {
type = list(object({
endpoint = string
writer = bool
}))
default = [
{
endpoint = "http:localhost:8080"
writer = false
},
{
endpoint = "http:localhost:8081"
writer = true
},
{
endpoint = "http:localhost:8082"
writer = false
}
]
}
output "all_out" {
value = var.instances.*.endpoint
}
output "writer" {
value = one([for instance in var.instances: instance.endpoint if instance.writer])
}
OUTPUT
Outputs:
all_out = tolist([
"http:localhost:8080",
"http:localhost:8081",
"http:localhost:8082",
])
writer = "http:localhost:8081"
I am using the following script to query a particular instance. There will be only one running instance with the given name. It is possible that another instance with the same name exists, but in a different instance state.
How do I filter on the instance state so that only the instance in the running state is retrieved?
data "aws_instance" "ec2" {
filter {
name = "tag:Name"
values = ["dev-us-west-2-myinstance"]
}
}
Currently I get the following error
multiple EC2 Instances matched; use additional constraints to reduce
matches to a single EC2 Instance
The Terraform documentation links to the AWS documentation for the describe-instances filters.
That documentation indicates you should do the following:
data "aws_instance" "ec2" {
filter {
name = "tag:Name"
values = ["dev-us-west-2-myinstance"]
}
filter {
name = "instance-state-name"
values = ["running"]
}
}
I have a tf script for provisioning a Cloud SQL instance, along with a couple of dbs and an admin user. I have renamed the instance, so a new instance was created, but Terraform runs into issues when it comes to deleting the old one.
Error: Error, failed to delete instance because deletion_protection is set to true. Set it to false to proceed with instance deletion
I have tried setting the deletion_protection to false but I keep getting the same error. Is there a way to check which resources need to have the deletion_protection set to false in order to be deleted?
I have only added it to the google_sql_database_instance resource.
My tf script:
// Provision the Cloud SQL Instance
resource "google_sql_database_instance" "instance-master" {
name = "instance-db-${random_id.random_suffix_id.hex}"
region = var.region
database_version = "POSTGRES_12"
project = var.project_id
settings {
availability_type = "REGIONAL"
tier = "db-f1-micro"
activation_policy = "ALWAYS"
disk_type = "PD_SSD"
ip_configuration {
ipv4_enabled = var.is_public ? true : false
private_network = var.network_self_link
require_ssl = true
dynamic "authorized_networks" {
for_each = toset(var.is_public ? [1] : [])
content {
name = "Public Internet"
value = "0.0.0.0/0"
}
}
}
backup_configuration {
enabled = true
}
maintenance_window {
day = 2
hour = 4
update_track = "stable"
}
dynamic "database_flags" {
iterator = flag
for_each = var.database_flags
content {
name = flag.key
value = flag.value
}
}
user_labels = var.default_labels
}
deletion_protection = false
depends_on = [google_service_networking_connection.cloudsql-peering-connection, google_project_service.enable-sqladmin-api]
}
// Provision the databases
resource "google_sql_database" "db" {
name = "orders-placement"
instance = google_sql_database_instance.instance-master.name
project = var.project_id
}
// Provision a super user
resource "google_sql_user" "admin-user" {
name = "admin-user"
instance = google_sql_database_instance.instance-master.name
password = random_password.user-password.result
project = var.project_id
}
// Get latest CA certificate
locals {
furthest_expiration_time = reverse(sort([for k, v in google_sql_database_instance.instance-master.server_ca_cert : v.expiration_time]))[0]
latest_ca_cert = [for v in google_sql_database_instance.instance-master.server_ca_cert : v.cert if v.expiration_time == local.furthest_expiration_time]
}
// Get SSL certificate
resource "google_sql_ssl_cert" "client_cert" {
common_name = "instance-master-client"
instance = google_sql_database_instance.instance-master.name
}
It seems like your code is going to recreate this SQL instance, but your current tfstate file still records the instance with deletion_protection set to true. In this case, you first need to change the value of this parameter to false, either manually in the tfstate file or by setting deletion_protection = false in the code and running terraform apply afterwards (beware: at that point your code shouldn't trigger a recreation of the instance). After these manipulations you can do anything you like with your SQL instance.
You will have to set deletion_protection=false, apply it and then proceed to delete.
As per the documentation
On newer versions of the provider, you must explicitly set deletion_protection=false (and run terraform apply to write the field to state) in order to destroy an instance. It is recommended to not set this field (or set it to true) until you're ready to destroy the instance and its databases.
Link
Editing Terraform state files directly / manually is not recommended
If you added deletion_protection to the google_sql_database_instance after the database instance was created, you need to run terraform apply before running terraform destroy so that deletion_protection is set to false on the database instance.
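Putting these together, a minimal sketch of the order of operations, assuming the configuration temporarily describes the old instance name again (the name below is a placeholder and the settings block is trimmed; only deletion_protection matters here):
resource "google_sql_database_instance" "instance-master" {
  name                = "old-instance-name"   // placeholder: the name currently recorded in state
  region              = var.region
  database_version    = "POSTGRES_12"
  project             = var.project_id
  deletion_protection = false                 // must be written to state by a terraform apply

  settings {
    tier = "db-f1-micro"
  }
}
// 1. terraform apply   (records deletion_protection = false for the existing instance)
// 2. reintroduce the rename (or run terraform destroy); the old instance can now be deleted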
We are spinning up G4 instances in AWS through Terraform and often encounter issues where one or two of the AZs in a given Region don't support the G4 instance type.
As of now I have hardcoded our TF configuration as shown below, creating a map of Regions and their AZs in an "azs" variable. From this map I can spin up clusters in the targeted AZs of each Region where G4 instances are supported.
I am using the AWS CLI commands mentioned in this AWS article to find which AZs are supported in a given Region and updating our "azs" variable as we expand to other Regions.
variable "azs" {
default = {
"us-west-2" = "us-west-2a,us-west-2b,us-west-2c"
"us-east-1" = "us-east-1a,us-east-1b,us-east-1e"
"eu-west-1" = "eu-west-1a,eu-west-1b,eu-west-1c"
"eu-west-2" = "eu-west-2a,eu-west-2b,eu-west-2c"
"eu-west-3" = "eu-west-3a,eu-west-3c"
}
However, the above approach requires human intervention and frequent updates (if AWS later adds support in AZs of a given Region that currently lack it).
There is this Stack Overflow question where the user is trying to do the same thing; however, they can fall back to another instance type if some of the AZs don't support the given instance type.
In my use case, I can't use any fallback instance type since our app servers only run on G4.
I have tried the workaround mentioned in an answer to the above Stack Overflow question, but it's failing with the following error message.
Error: no EC2 Instance Type Offerings found matching criteria; try different search

  on main.tf line 8, in data "aws_ec2_instance_type_offering" "example":
   8: data "aws_ec2_instance_type_offering" "example" {
I am using the TF config as below where my preferred_instance_types is g4dn.xlarge.
provider "aws" {
version = "2.70"
}
data "aws_availability_zones" "all" {
state = "available"
}
data "aws_ec2_instance_type_offering" "example" {
for_each = toset(data.aws_availability_zones.all.names)
filter {
name = "instance-type"
values = ["g4dn.xlarge"]
}
filter {
name = "location"
values = [each.value]
}
location_type = "availability-zone"
preferred_instance_types = ["g4dn.xlarge"]
}
output "foo" {
value = { for az, details in data.aws_ec2_instance_type_offering.example : az => details.instance_type }
}
I would like to know how to handle this failure, as Terraform is not able to find the G4 instance type in one of the AZs of a given Region and fails.
Is there any error handling I can do in Terraform to bypass this error for now and get the supported AZs as an output?
I had checked the other question you mentioned earlier, but I could never get the output right. Thanks to #ydaetskcoR for this response in that post - it helped me learn a bit and get my loop working.
Here is one way to accomplish what you are looking for... Let me know if it works for you.
Instead of "aws_ec2_instance_type_offering", use "aws_ec2_instance_type_offerings" ... (there is a 's' in the end. they are different Data Sources...
I will just paste the code here and assume you will be able to decode the logic. I am filtering for one specific instance type and if its not supported, instance_types will be black and i make a list of AZ thats does not do not have blank values.
variable "az" {
default="us-east-1"
}
variable "my_inst" {
default="g4dn.xlarge"
}
data "aws_availability_zones" "example" {
filter {
name = "opt-in-status"
values = ["opt-in-not-required"]
}
}
data "aws_ec2_instance_type_offerings" "example" {
for_each=toset(data.aws_availability_zones.example.names)
filter {
name = "instance-type"
values = [var.my_inst]
}
filter {
name = "location"
values = ["${each.key}"]
}
location_type = "availability-zone"
}
output "az_where_inst_avail" {
value = keys({ for az, details in data.aws_ec2_instance_type_offerings.example :
az => details.instance_types if length(details.instance_types) != 0 })
}
The output will look like the below. us-east-1e does not offer the instance type, so it is not in the output. Do test a few cases to see if it works every time.
Outputs:
az_where_inst_avail = [
"us-east-1a",
"us-east-1b",
"us-east-1c",
"us-east-1d",
"us-east-1f",
]
I think there's a cleaner way. The data source already filters by availability zone based on the given filter, and its locations attribute produces a list of locations of the desired location_type.
provider "aws" {
region = var.region
}
data "aws_ec2_instance_type_offerings" "available" {
filter {
name = "instance-type"
values = [var.instance_type]
}
location_type = "availability-zone"
}
output "azs" {
value = data.aws_ec2_instance_type_offerings.available.locations
}
Where the instance_type is t3.micro and region is us-east-1, this accurately produces:
azs = tolist([
"us-east-1d",
"us-east-1a",
"us-east-1c",
"us-east-1f",
"us-east-1b",
])
You don't need to feed it a list of availability zones because it already gets those from the supplied region.
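As a usage sketch (the vpc_id and vpc_cidr variables are assumptions for illustration, not from the question), the filtered list can be consumed directly, for example to create one subnet per AZ that actually offers the instance type:
locals {
  supported_azs = sort(data.aws_ec2_instance_type_offerings.available.locations)
}

resource "aws_subnet" "gpu" {
  count             = length(local.supported_azs)
  vpc_id            = var.vpc_id                                  # assumed variable
  availability_zone = local.supported_azs[count.index]
  cidr_block        = cidrsubnet(var.vpc_cidr, 4, count.index)    # assumed variable
}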
I am trying to spin up an AWS EC2 Spot instance with a validity period (for example, the created Spot instance should be accessible for 2 or 3 hours and then be terminated).
I am able to spin up the Spot instance using the code below, but I am unable to set the duration/validity of the created Spot instance.
I am sharing my Terraform code (both main.tf and variable.tf) with which I am trying to spin up the Spot instance.
I tried to set the expiry of the Spot instance using the below 2 lines of code in my main.tf file, but it didn't work:
valid_until = "${var.spot_instance_validity}"
terminate_instances_with_expiration = true
For valid_until, I wasn't able to produce the RFC3339 / YYYY-MM-DDTHH:MM:SSZ timestamp calculated as 2 hours from the time I spin up the Spot instance, so I removed the above 2 lines of code from my main.tf file.
Below is my main.tf file used to spin up the Spot instance:
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "${var.region}"
}
resource "aws_spot_instance_request" "dev-spot" {
ami = "${var.ami_web}"
instance_type = "t3.medium"
subnet_id = "subnet-xxxxxx"
associate_public_ip_address = "true"
key_name = "${var.key_name}"
vpc_security_group_ids = ["sg-xxxxxxx"]
spot_price = "${var.linux_spot_price}"
wait_for_fulfillment = "${var.wait_for_fulfillment}"
spot_type = "${var.spot_type}"
instance_interruption_behaviour = "${var.instance_interruption_behaviour}"
block_duration_minutes = "${var.block_duration_minutes}"
tags = {
Name = "dev-spot"
}
}
Below is the variable file "variable.tf"
variable "access_key" {
default = ""
}
variable "secret_key" {
default = ""
}
variable "region" {
default = "us-west-1"
}
variable "key_name" {
default = "win-key"
}
variable "windows_spot_price" {
type = "string"
default = "0.0309"
}
variable "linux_spot_price" {
type = "string"
default = "0.0125"
}
variable "wait_for_fulfillment" {
default = false
}
variable "spot_type" {
type = "string"
default = "one-time"
}
variable "instance_interruption_behaviour" {
type = "string"
default = "terminate"
}
variable "block_duration_minutes" {
type = "string"
default = "0"
}
variable "ami_web" {
default = "ami-xxxxxxxxxxxx"
}
The created Spot instance should have a validity of 1 or 2 hours, which I can set from the variable.tf file, so that the Spot instance is terminated after 1 or 2 hours (or the Spot instance request is cancelled).
Is there a way I can spin up an AWS EC2 Spot instance with an expiry?
It is not possible to schedule instances for termination.
However, you can use CloudWatch Events and Lambda to create your own instance termination logic. You need to create a scheduled event in Terraform according to your variable (valid_until), which invokes a Lambda function to terminate the instance.
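For illustration, a rough Terraform sketch of that wiring, assuming an aws_lambda_function.terminate_spot function (with the actual termination logic) is defined elsewhere; this is not a complete solution:
resource "aws_cloudwatch_event_rule" "spot_expiry_check" {
  name                = "dev-spot-expiry-check"
  schedule_expression = "rate(15 minutes)"   # the Lambda compares instance launch time against the validity window
}

resource "aws_cloudwatch_event_target" "invoke_terminator" {
  rule = "${aws_cloudwatch_event_rule.spot_expiry_check.name}"
  arn  = "${aws_lambda_function.terminate_spot.arn}"   # assumed Lambda, defined elsewhere
}

resource "aws_lambda_permission" "allow_cloudwatch" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.terminate_spot.function_name}"
  principal     = "events.amazonaws.com"
  source_arn    = "${aws_cloudwatch_event_rule.spot_expiry_check.arn}"
}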
AWS also has a solution called Instance Scheduler. You can simply attach tags to your spot instances to create start/stop schedules.
However, in this case you should change the instance shutdown behaviour, which defaults to stop, to terminate. Thus, your instances will be terminated instead of stopped. This can be achieved with the aws_instance instance_initiated_shutdown_behavior argument in Terraform.
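A minimal sketch of that change on the spot request from the question (aws_spot_instance_request accepts the same arguments as aws_instance; the remaining arguments are omitted here):
resource "aws_spot_instance_request" "dev-spot" {
  ami                                  = "${var.ami_web}"
  instance_type                        = "t3.medium"
  instance_initiated_shutdown_behavior = "terminate"
  # ...other arguments as in the original request above...
}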
I have 3 different versions of an AMI, for 3 different nodes in a cluster.
data "aws_ami" "node1"
{
# Use the most recent AMI that matches the pattern below in 'values'.
most_recent = true
filter {
name = "name"
values = ["AMI_node1*"]
}
filter {
name = "tag:version"
values = ["${var.node1_version}"]
}
}
data "aws_ami" "node2"
{
# Use the most recent AMI that matches the pattern below in 'values'.
most_recent = true
filter {
name = "name"
values = ["AMI_node2*"]
}
filter {
name = "tag:version"
values = ["${var.node2_version}"]
}
}
data "aws_ami" "node3"
{
...
}
I would like to create 3 different Launch Configurations and Auto Scaling Groups, one using each of the AMIs respectively.
resource "aws_launch_configuration" "node"
{
count = "${local.node_instance_count}"
# Name-prefix must be used otherwise terraform fails to perform updates to existing launch configurations due to
# a name conflict: LCs are immutable and the LC cannot be destroyed without destroying attached ASGs as well, which
# terraform will not do. Using name-prefix lets a new LC be created and swapped into the ASG.
name_prefix = "${var.environment_name}-node${count.index + 1}-"
image_id = "${data.aws_ami.node[count.index].image_id}"
instance_type = "${var.default_ec2_instance_type}"
...
}
However, I am not able to index aws_ami.node1, aws_ami.node2 and aws_ami.node3 with count.index the way I have shown above. I get the following error:
Error reading config for aws_launch_configuration[node]: parse error at 1:39: expected "}" but found "."
Is there another way I can do this in Terraform?
Indexing data sources isn't something that's doable at the moment.
You're likely better off simply dropping the data sources you've defined and codifying the image IDs into a Terraform map variable.
variable "node_image_ids" {
type = "map"
default = {
"node1" = "1234434"
"node2" = "1233334"
"node3" = "1222434"
}
}
Then, consume it:
image_id = "${lookup(var.node_image_ids, concat("node", count.index), "some_default_image_id")}"
The downside of this is that you'll need to manually update the image id when images are upgraded.