I am struggling a bit here and am wondering if it is even possible. I have a variable declared as shown below:
variable "subnets" {
  type = list(object({
    name       = string
    cidr_block = string
  }))
  default = [
    {
      name       = "private"
      cidr_block = "10.0.0.0/24"
    },
    {
      name       = "public"
      cidr_block = "10.0.1.0/24"
    }
  ]
}
and then I use a data source to query the availability zones in the current region:
data "aws_availability_zones" "available" {}
Now what I'm trying to do is create the above subnets in each availability zone, but I can't seem to combine the zones with the above variable. What I am trying is:
resource "aws_subnet" "subnet" {
  for_each = { for idx, az in data.aws_availability_zones.available.names : idx => az }

  vpc_id            = var.vpc_id
  availability_zone = data.aws_availability_zones.available.names[each.key]
  cidr_block        = # (this is where I want to query my var.subnets but I don't
                      # seem to be able to do another for here)
}
What I am hoping to end up with is 6 subnets: 3 private and 3 public, with one of each in each of the zones. Would appreciate any help here. Thanks.
I think your intent here is to dynamically select two of the available availability zones and declare a subnet in each.
This is possible to do, and I will show a configuration example below, but first I want to caution that this is a potentially risky design: the set of availability zones can vary over time, and so you might find that, without any direct changes to your configuration, a later Terraform plan proposes to recreate one or both of your subnets in different availability zones.
For that reason, I'd typically suggest making the assignment of subnets to availability zones something you intentionally choose and encode statically in your configuration, rather than selecting them dynamically, to ensure that your configuration's effect remains stable over time unless you intentionally change it.
With that caveat out of the way, I do still want to answer the general question here, because this general idea of "zipping together" two collections of different lengths can arise in other situations, and so knowing a pattern for it might still prove useful, including if you ultimately decide to make the list of availability zones a variable rather than a data source lookup.
variable "subnets" {
  type = list(object({
    name       = string
    cidr_block = string
  }))
}

data "aws_availability_zones" "available" {
}

locals {
  # The availability zones are returned as an unordered
  # set, so we'll sort them to be explicit that we're
  # depending on one particular ordering.
  zone_names = sort(data.aws_availability_zones.available.names)

  subnets = tolist([
    for i, sn in var.subnets : {
      name       = sn.name
      cidr_block = sn.cidr_block
      zone       = element(local.zone_names, i)
    }
  ])
}
The last expression in the above example relies on the element function, which is similar to indexing like local.zone_names[i], but instead of returning an error when i is too large it wraps around and selects items from the start of the zone list again.
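With local.subnets constructed, the subnet resources can then be declared with for_each. Here's a minimal sketch, assuming a var.vpc_id like the one in your question:

```hcl
# Sketch: one subnet per entry in local.subnets, keyed by name so
# that adding or removing an entry doesn't reshuffle the others.
resource "aws_subnet" "subnet" {
  for_each = { for sn in local.subnets : sn.name => sn }

  vpc_id            = var.vpc_id
  availability_zone = each.value.zone
  cidr_block        = each.value.cidr_block

  tags = {
    Name = each.key
  }
}
```

Using the subnet name as the for_each key (rather than a numeric index) keeps the instance addresses stable even if the order of var.subnets changes.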
I am setting up alerting in AWS, using AWS Budgets to trigger an alert if an account's cost exceeds x% or x amount of the cost by x date of the month, to identify when spikes in price occur.
resource "aws_budgets_budget" "all-cost-budget" {
  name         = "all-cost-budget"
  budget_type  = "COST"
  limit_amount = "10"
  limit_unit   = "USD"
  time_unit    = "DAILY"

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = "100"
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = ["email address"]
  }
}
We currently do not have a specific limit amount, and would like to set it based on the previous month's spending.
Is there a way to do this dynamically within AWS and Terraform?
You can set up a Lambda function which automatically executes at the start of every month and updates the budget value. The AWS Quick Start for Landing Zone has a CloudFormation template which does something similar to what you have described, setting the budget to the rolling average of the last three months (Template, Documentation). You will need to convert the CloudFormation template to Terraform and tweak the criteria to match your requirements. You might also want to consider using FORECASTED instead of ACTUAL.
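As a rough sketch of the scheduling side in Terraform (the function name budget_updater here is hypothetical; its code would compute the new limit from Cost Explorer data and call the Budgets UpdateBudget API):

```hcl
# Sketch: an EventBridge (CloudWatch Events) rule that fires at
# midnight UTC on the 1st of every month and invokes the Lambda.
resource "aws_cloudwatch_event_rule" "monthly" {
  name                = "update-budget-monthly"
  schedule_expression = "cron(0 0 1 * ? *)"
}

resource "aws_cloudwatch_event_target" "invoke_updater" {
  rule = aws_cloudwatch_event_rule.monthly.name
  arn  = aws_lambda_function.budget_updater.arn
}

# Allow EventBridge to invoke the function.
resource "aws_lambda_permission" "allow_events" {
  statement_id  = "AllowExecutionFromEventBridge"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.budget_updater.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.monthly.arn
}
```

The aws_lambda_function resource itself (and the IAM role letting it call budgets:UpdateBudget and Cost Explorer) is omitted here.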
I'm trying to create a metric on health check for a backend service (load balancer). I need this metric to trigger alerts on failed health checks.
From: https://cloud.google.com/monitoring/api/v3/kinds-and-types
Emulating string-valued custom metrics
String values in custom metrics are not supported, but you can
replicate string-valued metric functionality in the following ways:
Create a GAUGE metric using an INT64 value as an enum that
maps to a string value. Externally translate the enum to a string
value when you query the metric.
Create a GAUGE metric with a BOOL value and a label whose
value is one of the strings you want to monitor. Use the boolean to
indicate if the value is the active value.
For example, suppose you want to create a string-valued metric called "status" with possible options OK, OFFLINE, or PENDING. You could make a GAUGE metric with a label called status_value. Each update would write three time series, one for each status_value (OK, OFFLINE, or PENDING), with a value of 1 for "true" or 0 for "false".
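If I understand that second approach correctly, the descriptor side could be declared in Terraform something like the following (names are illustrative; the actual time series would still need to be written by the application or an agent):

```hcl
# Sketch: a custom BOOL GAUGE metric with a "status_value" label.
# Each write emits one point per status (OK/OFFLINE/PENDING), with
# true only for the currently active status.
resource "google_monitoring_metric_descriptor" "status" {
  description  = "Service status, one boolean time series per status label"
  display_name = "status"
  type         = "custom.googleapis.com/status"
  metric_kind  = "GAUGE"
  value_type   = "BOOL"

  labels {
    key         = "status_value"
    value_type  = "STRING"
    description = "One of OK, OFFLINE, or PENDING"
  }
}
```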
Using Terraform, I tried the following, but I'm not sure if it's really converting the values "UNHEALTHY" and "HEALTHY" to 0/1s. I tried to switch metric_kind to GAUGE instead of DELTA, but the error from Terraform said I needed to use DELTA, and that DISTRIBUTION is required for the value_type. Has anybody tried the docs above where it says a GAUGE metric with a BOOL value? Don't we need some kind of map of strings to booleans?
Here is my terraform:
resource "google_logging_metric" "logging_metric" {
  name   = var.name
  filter = "logName=projects/[project_id]/logs/compute.googleapis.com%2Fhealthchecks"

  metric_descriptor {
    metric_kind = "DELTA"
    value_type  = "DISTRIBUTION"

    labels {
      key         = "status"
      value_type  = "STRING"
      description = "status of health check"
    }

    display_name = var.display_name
  }

  value_extractor = "EXTRACT(jsonPayload.request)"

  label_extractors = {
    "status" = "EXTRACT(jsonPayload.healthCheckProbeResult.healthState)"
  }

  bucket_options {
    linear_buckets {
      num_finite_buckets = 3
      width              = 1
      offset             = 1
    }
  }
}
data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_subnet" "subnet" {
  count = length(data.aws_availability_zones.available.names)
  # ...
}
Let's say the region in my area has 4 availability zones (A, B, C, D), and the code creates a subnet in each AZ.
But I want to create a subnet in A and B only.
Can I achieve that goal by editing this line?
count = length(data.aws_availability_zones.available.names)
Or is adding another resource the only answer?
Thank you for your time.
If you want to use only the first two AZs, then you can do:
resource "aws_subnet" "subnet" {
  count             = 2
  availability_zone = data.aws_availability_zones.available.names[count.index]
  # ...
}
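If you want to be explicit (and deterministic) about which two zones are chosen, a variant using sort and slice might look like:

```hcl
# Sketch: sort the zone names so the selection doesn't depend on the
# API's return order, then take the first two.
locals {
  chosen_zones = slice(sort(data.aws_availability_zones.available.names), 0, 2)
}

resource "aws_subnet" "subnet" {
  count             = length(local.chosen_zones)
  availability_zone = local.chosen_zones[count.index]
  # ...
}
```

Listing the desired zone names statically in a variable is even more predictable, since the set of available zones can change over time.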
I want to create a Terraform script that creates a number of VPCs. Then I want my script to create 'n' subnets in all the VPCs, in one subnet resource block. I'm able to create VPCs using count inside the resource block, but unable to use it with the subnets. Please help.
There is no direct provision for this requirement in Terraform, but we can tweak count to fulfill it.
First, create a resource block that creates some number of VPCs.
resource "aws_vpc" "main" {
  count                = "${var.vpc_count}"
  cidr_block           = "${element(var.cidr_prefix, count.index)}.0.0/16"
  enable_dns_support   = "true"
  enable_dns_hostnames = "true"

  tags {
    Name = "${var.vpc_name}${count.index}"
  }
}
You can use interpolation for count as well, by passing the value through your variables.tf file or a .tfvars file.
Now use this script to create subnet_count * vpc_count subnets, distributed evenly across all the VPCs and availability zones.
resource "aws_subnet" "private_subnet" {
  count             = "${var.subnet_count * var.vpc_count}"
  vpc_id            = "${element(aws_vpc.main.*.id, count.index % var.vpc_count)}"
  cidr_block        = "${element(var.cidr_prefix, count.index)}.${count.index}.0/24"
  availability_zone = "${element(data.aws_availability_zones.all.names, count.index)}"

  tags {
    Name = "${var.vpc_name}-${element(var.availability_zone, count.index)}-${count.index}"
  }
}
You can separately define the VPC CIDR blocks as a list and the subnet CIDR blocks as a list, although here I have used a CIDR prefix and count to construct the CIDR blocks for the subnets.
Have a look at the variable cidr_prefix.
variable "cidr_prefix" {
  type        = "list"
  description = "The first 16 bits of the desired CIDR block/s. Note: The number of elements in the list should not be less than the specified count of VPCs."
  default     = ["172.16", "10.0", "143.14", "100.10"]
}
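On Terraform 0.12 or later, a sketch of the same VPC-by-subnet cross-product using setproduct and for_each, which avoids the count arithmetic, might be:

```hcl
# Sketch for Terraform 0.12+: build every (vpc, subnet) index pair
# explicitly, then key for_each on a stable "vpc-subnet" string.
locals {
  vpc_subnet_pairs = [
    for pair in setproduct(range(var.vpc_count), range(var.subnet_count)) : {
      vpc_index    = pair[0]
      subnet_index = pair[1]
    }
  ]
}

resource "aws_subnet" "private_subnet" {
  for_each = {
    for p in local.vpc_subnet_pairs : "${p.vpc_index}-${p.subnet_index}" => p
  }

  vpc_id     = aws_vpc.main[each.value.vpc_index].id
  cidr_block = "${var.cidr_prefix[each.value.vpc_index]}.${each.value.subnet_index}.0/24"
}
```

Because each instance is keyed by its pair of indices rather than a single flat counter, changing vpc_count or subnet_count later doesn't reshuffle the unrelated subnets.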
#Shiv - This is a good solution, which I was also able to achieve using variable lists.
The challenge comes if you want to take this further and create route table associations for only specific subnets created.
Imagine a design:
3 VPCs (mgmt-vpc, shared-vpc & dedicated-vpc)
2 AZs in each VPC (az-1a & az-1b)
2 subnets in each AZ (pub-sub-az-1a & pvt-sub-az-1a) [12 subnets in total -- 6 public & 6 private]
2 route tables (mgmt-vpc-pub-rt & mgmt-vpc-pvt-rt)
Not so 'NEAT & SMART' solution: I noted the subnet element IDs from the plan and modified my output variables so that the correct subnets are tagged. But this is a workaround, as it involves manual steps which might introduce errors.
Create an output variable for each subnet:
output "mgmt_pvt_subnet-1" {
  value = aws_subnet.fg-pvt-sub[0].id
}
Create multiple route table associations by referring to the output variable:
resource "aws_route_table_association" "mgmt-pvt-subnet-rt-assn" {
  route_table_id = aws_route_table.fg-pvt-rt.id
  subnet_id      = aws_subnet.fg-pvt-sub[0].id
}
Challenge:
The issue comes when you have to 'cherry pick' the private subnets belonging to mgmt-vpc to be associated with mgmt-vpc-pvt-rt.
I've tried using count, for, for_each, and conditionals, but I'm not able to achieve my goal...
Any pointers would be really appreciated.
I'm using boto to return instances with a cluster_id tag which is a string uuid that uniquely identifies a cluster.
I'm trying to use boto to return the instances with that tag to ensure the cluster has been provisioned and is ready. Thus, when the number of individual instances with the cluster_id tag matches the expected number the cluster is ready and my program can begin the next step of automation.
These instances are in an autoscaling group, but I'm not sure why boto returns 0. I have verified the cluster_id is the same in the program and in AWS for each instance. Reservations just returns 0.
Python Code
ec2_conn = boto.connect_ec2(aws_access_key_id=aws_access_key_id,
                            aws_secret_access_key=aws_secret_access_key)
reservations = ec2_conn.get_all_instances(filters={"tag:cluster_id": str(cluster_id_tag)})
instances = [i for r in reservations for i in r.instances]
number_of_instances = len(instances)
cluster_id var in the program = 50a5fab0-e166-11e5-9ee9-a45e60e4b9b1
ASG tags:
ElasticClientNode = no
Name = elasticsearch-loading-master-nodes-cluster
a_or_b = a
cluster_id = 50a5fab0-e166-11e5-9ee9-a45e60e4b9b1
version = 1.0
Instance tags:
ElasticClientNode = no
Name = elasticsearch-loading-master-nodes-cluster
a_or_b = a
aws:autoscaling:groupName = elasticsearch
cluster_id = 50a5fab0-e166-11e5-9ee9-a45e60e4b9b1
version = 1.0
The answer was to use connect_to_region rather than connect_ec2, so the query runs against the region the instances are actually in:
ec2_conn = boto.ec2.connect_to_region("us-west-2",
aws_access_key_id=aws_access_key_id,
aws_secret_access_key=aws_secret_access_key)