How to use != in a Terraform data block

I want to get a list of EC2 instance IDs, but the following is wrong. How can I get all the EC2 instances except those with Role = ngx?
data "aws_instances" "ec2" {
  filter {
    name   = "Role"
    values != ["ngx"]
  }
}

You can't do this. First, Role is not a valid filter name; maybe you wanted iam-instance-profile.arn? Second, the EC2 filters don't support inverse searches.
You have to fetch all the instances and then filter them yourself in locals.
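For example, a minimal sketch of that approach (assumes Role is a tag; requires Terraform 0.12.6+ for for_each on a data source):

```hcl
# Fetch every instance in the region, then drop the ones tagged Role = "ngx".
data "aws_instances" "all" {}

data "aws_instance" "each" {
  for_each    = toset(data.aws_instances.all.ids)
  instance_id = each.value
}

locals {
  # IDs of all instances whose Role tag is not "ngx" (or that have no Role tag).
  non_ngx_ids = [
    for id, inst in data.aws_instance.each : id
    if lookup(inst.tags, "Role", "") != "ngx"
  ]
}
```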


multiple EC2 Instances matched; use additional constraints to reduce matches to a single EC2 Instance

I am using the following script to query a particular instance. There will be only one running instance with the given name, but another instance with the same name may exist in a different instance state.
How do I filter on the instance's state so that only the running instance is retrieved?
data "aws_instance" "ec2" {
  filter {
    name   = "tag:Name"
    values = ["dev-us-west-2-myinstance"]
  }
}
Currently I get the following error:
multiple EC2 Instances matched; use additional constraints to reduce
matches to a single EC2 Instance
The Terraform documentation links to the AWS documentation for the describe-instances filters.
That documentation indicates you should do the following:
data "aws_instance" "ec2" {
  filter {
    name   = "tag:Name"
    values = ["dev-us-west-2-myinstance"]
  }

  filter {
    name   = "instance-state-name"
    values = ["running"]
  }
}

Get AWS account ID by name

I know there are multiple ways to get AWS account name by its ID, but is the opposite possible? Is there a way to programmatically (API, CLI, terraform etc.) get AWS account ID by its name?
Update: I forgot to mention that these accounts exist under an organization structure in a specific OU; maybe this could help.
While this is not ideal, I realized that the aws organizations list-accounts-for-parent command is the best compromise. It gives me all accounts within a given OU, which I can filter by account name.
Given that my solution will ultimately be implemented in Terraform, I came up with something like this:
data "external" "accounts" {
  program = ["aws", "organizations", "list-accounts-for-parent", "--parent-id", local.ou, "--query", "Accounts[?Name==`${local.account_name}`] | [0]"]
}

locals {
  ou           = "ou-12345678"
  account_name = "my-cool-account"
  account_id   = lookup(data.external.accounts.result, "Id", null)
}
It executes the AWS CLI command and, if the account info is found, returns a map of key/values, from which the lookup function retrieves the account ID.
I was able to solve it with the following:
data "aws_organizations_organization" "main" {}

locals {
  account-name  = "account1"
  account-index = index(data.aws_organizations_organization.main.accounts.*.name, local.account-name)
  account-id    = data.aws_organizations_organization.main.accounts[local.account-index].id
}

output "account_id" {
  value = local.account-id
}

Terraform: Handle error if no EC2 Instance Type offerings found in AZ

We are spinning up G4 instances in AWS through Terraform and often encounter issues where one or two of the AZs in a given Region don't support the G4 instance type.
As of now I have hardcoded our TF configuration as below, creating a map of Regions and AZs in an "azs" variable. From this map I can spin up clusters in targeted AZs of a Region where we have G4 instance support.
I am using the AWS command line mentioned in this AWS article to find which AZs are supported in a given Region, and I update our "azs" variable as we expand to other Regions.
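For reference, that check can be done with the describe-instance-type-offerings command (the region and instance type here are just examples):

```shell
aws ec2 describe-instance-type-offerings \
  --location-type availability-zone \
  --filters Name=instance-type,Values=g4dn.xlarge \
  --region us-east-1 \
  --query "InstanceTypeOfferings[].Location" \
  --output text
```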
variable "azs" {
  default = {
    "us-west-2" = "us-west-2a,us-west-2b,us-west-2c"
    "us-east-1" = "us-east-1a,us-east-1b,us-east-1e"
    "eu-west-1" = "eu-west-1a,eu-west-1b,eu-west-1c"
    "eu-west-2" = "eu-west-2a,eu-west-2b,eu-west-2c"
    "eu-west-3" = "eu-west-3a,eu-west-3c"
  }
}
However, the above approach requires human intervention and frequent updates (if AWS later adds support to previously unsupported AZs in a given Region).
There is this Stack Overflow question where a user is trying to do the same thing; however, he can fall back to another instance type if any of the AZs don't support the given instance type.
In my use case I can't use any fallback instance type, since our app servers only run on G4.
I have tried the workaround mentioned in the answer to that question, but it fails with the following error message:
Error: no EC2 Instance Type Offerings found matching criteria; try different search

  on main.tf line 8, in data "aws_ec2_instance_type_offering" "example":
   8: data "aws_ec2_instance_type_offering" "example" {
I am using the TF config below, where my preferred_instance_types is g4dn.xlarge.
provider "aws" {
  version = "2.70"
}

data "aws_availability_zones" "all" {
  state = "available"
}

data "aws_ec2_instance_type_offering" "example" {
  for_each = toset(data.aws_availability_zones.all.names)

  filter {
    name   = "instance-type"
    values = ["g4dn.xlarge"]
  }

  filter {
    name   = "location"
    values = [each.value]
  }

  location_type            = "availability-zone"
  preferred_instance_types = ["g4dn.xlarge"]
}

output "foo" {
  value = { for az, details in data.aws_ec2_instance_type_offering.example : az => details.instance_type }
}
I would like to know how to handle this failure, as Terraform fails when it cannot find the G4 instance type in one of the AZs of a given Region.
Is there any Terraform error handling I can do to bypass this error for now and get the supported AZs as an output?
I had checked the other question you mentioned earlier, but I could never get the output correctly. Thanks to @ydaetskcoR for the response in that post; I learned a bit and got my loop working.
Here is one way to accomplish what you are looking for. Let me know if it works for you.
Instead of aws_ec2_instance_type_offering, use aws_ec2_instance_type_offerings (note the 's' at the end; they are different data sources).
I will just paste the code here and assume you will be able to decode the logic. I am filtering for one specific instance type; if it's not supported, instance_types will be blank, and I make a list of the AZs that do not have blank values.
variable "az" {
  default = "us-east-1"
}

variable "my_inst" {
  default = "g4dn.xlarge"
}

data "aws_availability_zones" "example" {
  filter {
    name   = "opt-in-status"
    values = ["opt-in-not-required"]
  }
}

data "aws_ec2_instance_type_offerings" "example" {
  for_each = toset(data.aws_availability_zones.example.names)

  filter {
    name   = "instance-type"
    values = [var.my_inst]
  }

  filter {
    name   = "location"
    values = [each.key]
  }

  location_type = "availability-zone"
}

output "az_where_inst_avail" {
  value = keys({
    for az, details in data.aws_ec2_instance_type_offerings.example :
    az => details.instance_types if length(details.instance_types) != 0
  })
}
The output will look like below. us-east-1e does not have the instance type, so it's not in the output. Do test a few cases to see if it works every time.
Outputs:
az_where_inst_avail = [
"us-east-1a",
"us-east-1b",
"us-east-1c",
"us-east-1d",
"us-east-1f",
]
I think there's a cleaner way. The data source already filters by availability zone based on the given filter, and there is an attribute, locations, that produces a list of the desired location_type.
provider "aws" {
  region = var.region
}

data "aws_ec2_instance_type_offerings" "available" {
  filter {
    name   = "instance-type"
    values = [var.instance_type]
  }

  location_type = "availability-zone"
}

output "azs" {
  value = data.aws_ec2_instance_type_offerings.available.locations
}
With instance_type set to t3.micro and region set to us-east-1, this accurately produces:
azs = tolist([
"us-east-1d",
"us-east-1a",
"us-east-1c",
"us-east-1f",
"us-east-1b",
])
You don't need to feed it a list of availability zones because it already gets those from the supplied region.

Create snapshots of multiple EBS volumes using Terraform

I am trying to create snapshots of certain EBS volumes based on tags in a particular AWS region using Terraform.
I have tried filtering EBS volumes based on tags. I can get a clear output of the EBS volume ID when only one tag value is specified in the filter, but for more than one value I get the following error:
data.aws_ebs_volume.ebs_volume: data.aws_ebs_volume.ebs_volume: Your
query returned more than one result. Please try a more specific search
criteria, or set most_recent attribute to true.
Below is my terraform template:
data "aws_ebs_volume" "ebs_volume" {
  filter {
    name   = "tag:Name"
    values = ["EBS1", "EBS2", "EBS3"]
  }
}

output "ebs_volume_id" {
  value = "${data.aws_ebs_volume.ebs_volume.id}"
}

resource "aws_ebs_snapshot" "ebs_volume" {
  volume_id = "${data.aws_ebs_volume.ebs_volume.id}"
}
Is there a clear way to create snapshots of multiple EBS volumes using some kind of looping statement in Terraform?
You can use the count meta-parameter to loop over lists, creating multiple resources or data sources.
In your case you could do something like this:
variable "ebs_volumes" {
  default = [
    "EBS1",
    "EBS2",
    "EBS3",
  ]
}

data "aws_ebs_volume" "ebs_volume" {
  count = "${length(var.ebs_volumes)}"

  filter {
    name   = "tag:Name"
    values = ["${var.ebs_volumes[count.index]}"]
  }
}

output "ebs_volume_ids" {
  value = ["${data.aws_ebs_volume.ebs_volume.*.id}"]
}

resource "aws_ebs_snapshot" "ebs_volume" {
  count     = "${length(var.ebs_volumes)}"
  volume_id = "${data.aws_ebs_volume.ebs_volume.*.id[count.index]}"
}

Terraform and DigitalOcean: assign volume to specific droplet created with count parameter

I just started exploring Terraform to spin up droplets and volumes on DigitalOcean.
My question is about the right way to do the following:
create a certain number of droplet instances using count within a digitalocean_droplet resource named ubuntu16
assign a digitalocean_volume to only one, or a subset, of the previously created droplets.
How do I do it? I was assuming I could use the droplet_ids property on the digitalocean_volume resource. Something like:
resource "digitalocean_volume" "foovolume" {
...
droplet_ids = ["${digitalocean_droplet.ubuntu16.0.id}"]
}
Validating it with terraform validate I got:
Error: digitalocean_volume.foovolume: "droplet_ids": this field cannot be set
Any advice? Thanks for any input on this.
Regards
The way the Terraform provider for DigitalOcean is currently implemented requires that you take the opposite approach: you specify which volumes are attached to which Droplets by defining the volume_ids of the Droplet resource. For example:
resource "digitalocean_volume" "volume" {
  region      = "nyc3"
  count       = 3
  name        = "volume-${count.index + 1}"
  size        = 100
  description = "an example volume"
}

resource "digitalocean_droplet" "web" {
  count      = 3
  image      = "ubuntu-17-10-x64"
  name       = "web-${count.index + 1}"
  region     = "nyc3"
  size       = "1gb"
  volume_ids = ["${element(digitalocean_volume.volume.*.id, count.index)}"]
}
If you look at the docs for the volume resource, you'll see that droplet_ids is a "computed" field. This means that you are unable to set the field, and that its value is computed by Terraform via the provider's API.
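For instance, a computed field can still be read once the resources exist. A minimal sketch, assuming the digitalocean_volume resource from the example above:

```hcl
# droplet_ids is computed: it can be read in outputs or expressions, but not set.
output "volume_droplet_ids" {
  value = "${digitalocean_volume.volume.*.droplet_ids}"
}
```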