How can I use the data source aws_security_group?
I have an existing security group in my AWS account. How can I reference it in my Terraform code and attach it to a newly created instance? I am using a Terraform data source, but I am getting an error. I have pasted my code and the error below; can anyone please tell me how to resolve it?
provider "aws" {
profile = "default"
region = "us-east-2"
}
data "aws_vpc" "tesing" {
filter {
name = "tag:Name"
values = ["test-vpc"]
}
}
data "aws_security_group" "sg" {
filter {
name = "group-name"
values = ["testing"]
}
filter {
name = "vpc-id"
values = ["data.aws_vpc.testing.id"]
}
}
resource "aws_instance" "example" {
ami = "ami-03657b56516ab7912"
instance_type = "t2.micro"
vpc_security_group_ids = ["data.aws_security_group.sg.id"]
}
output "ipddress" {
value = aws_instance.example.public_ip
}
I am getting the error below; can you please help me figure out how to resolve it?
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
data.aws_security_group.sg: Refreshing state...
data.aws_vpc.tesing: Refreshing state...
Error: InvalidParameterValue: vpc-id
status code: 400, request id: 22e0f8c9-2265-4077-b271-6231b4787db1
Error: no matching VPC found
How to resolve this:

First, you have a spelling mistake:

"aws_vpc" "tesing"

It should be:

"aws_vpc" "testing"

Second,

values = ["data.aws_vpc.testing.id"]

should be:

values = [data.aws_vpc.testing.id]

(quoting the reference makes Terraform treat it as a literal string rather than as the VPC ID resolved from the data source).
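Note that vpc_security_group_ids in the aws_instance block has the same quoting problem, so it would pass a literal string instead of the security group ID. Putting it all together, the corrected configuration (keeping the names and AMI from the question) would look like:

data "aws_vpc" "testing" {
  filter {
    name   = "tag:Name"
    values = ["test-vpc"]
  }
}

data "aws_security_group" "sg" {
  filter {
    name   = "group-name"
    values = ["testing"]
  }
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.testing.id] # expression, not a quoted string
  }
}

resource "aws_instance" "example" {
  ami                    = "ami-03657b56516ab7912"
  instance_type          = "t2.micro"
  vpc_security_group_ids = [data.aws_security_group.sg.id] # same fix here
}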
I am getting the error below while provisioning Composer via Terraform.
Error: Error waiting to create Environment: Error waiting to create Environment: Error waiting for Creating Environment: error while retrieving operation: Get "https://composer.googleapis.com/v1beta1/projects/aayush-terraform/locations/us-central1/operations/ee459492-abb0-4646-893e-09d112219d79?alt=json&prettyPrint=false": write tcp 10.227.112.165:63811->142.251.12.95:443: write: broken pipe. An initial environment was or is still being created, and clean up failed with error: Getting creation operation state failed while waiting for environment to finish creating, but environment seems to still be in 'CREATING' state. Wait for operation to finish and either manually delete environment or import "projects/aayush-terraform/locations/us-central1/environments/example-composer-env" into your state.
Below is the code snippet:
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~>3.0"
    }
  }
}

variable "gcp_region" {
  type        = string
  description = "Region to use for GCP provider"
  default     = "us-central1"
}

variable "gcp_project" {
  type        = string
  description = "Project to use for this config"
  default     = "aayush-terraform"
}

provider "google" {
  region  = var.gcp_region
  project = var.gcp_project
}

resource "google_service_account" "test" {
  account_id   = "composer-env-account"
  display_name = "Test Service Account for Composer Environment"
}

resource "google_project_iam_member" "composer-worker" {
  role   = "roles/composer.worker"
  member = "serviceAccount:${google_service_account.test.email}"
}

resource "google_compute_network" "test" {
  name                    = "composer-test-network"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "test" {
  name          = "composer-test-subnetwork"
  ip_cidr_range = "10.2.0.0/16"
  region        = "us-central1"
  network       = google_compute_network.test.id
}

resource "google_composer_environment" "test" {
  name   = "example-composer-env"
  region = "us-central1"
  config {
    node_count = 3
    node_config {
      zone            = "us-central1-a"
      machine_type    = "n1-standard-1"
      network         = google_compute_network.test.id
      subnetwork      = google_compute_subnetwork.test.id
      service_account = google_service_account.test.name
    }
  }
}
NOTE: The Composer environment does get created even though this error is thrown, and I am provisioning it via a service account that has been granted Owner access.
I had the same problem, and I solved it by giving the "composer.operations.get" permission to the service account that is provisioning the Composer environment.
This permission is part of the Composer Administrator role.
To avoid similar failures in future operations like updates or deletion through Terraform, I think it's better to use the role rather than the single permission.
Or, if you want to do some least-privilege work, you can start with the role, then remove the permissions you think you won't need and test your Terraform code.
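If you also manage IAM with Terraform, the role grant could look roughly like this (a sketch; the member email is a placeholder for whatever service account actually runs your Terraform):

resource "google_project_iam_member" "composer_admin" {
  project = "aayush-terraform" # project ID from the question
  # Composer Administrator role, which includes composer.operations.get
  role    = "roles/composer.admin"
  # placeholder: the service account that provisions Composer
  member  = "serviceAccount:terraform-provisioner@aayush-terraform.iam.gserviceaccount.com"
}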
We are spinning up G4 instances in AWS through Terraform and often encounter issues where one or two of the AZs in a given Region don't support the G4 instance type.
For now I have hardcoded our TF configuration as shown below, creating a map of Regions and AZs in an "azs" variable. From this map I can spin up clusters in the targeted AZs of each Region where G4 instances are supported.
I am using the aws command line mentioned in this AWS article to find which AZs are supported in a given Region (an example command is shown after the variable block below) and updating our "azs" variable as we expand to other Regions.
variable "azs" {
default = {
"us-west-2" = "us-west-2a,us-west-2b,us-west-2c"
"us-east-1" = "us-east-1a,us-east-1b,us-east-1e"
"eu-west-1" = "eu-west-1a,eu-west-1b,eu-west-1c"
"eu-west-2" = "eu-west-2a,eu-west-2b,eu-west-2c"
"eu-west-3" = "eu-west-3a,eu-west-3c"
}
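For reference, the supported-AZ lookup from that article can be done with describe-instance-type-offerings, along these lines (a sketch; adjust the Region and instance type as needed):

aws ec2 describe-instance-type-offerings \
  --location-type availability-zone \
  --filters Name=instance-type,Values=g4dn.xlarge \
  --region us-east-1 \
  --query "InstanceTypeOfferings[].Location" \
  --output text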
However, the above approach requires human intervention and frequent updates (e.g., if AWS later adds support in previously unsupported AZs of a given Region).
There is a Stack Overflow question where a user is trying to do the same thing; however, they can fall back to another instance type if an AZ doesn't support the given instance type.
In my use case, I can't use any fallback instance type, since our app servers only run on G4.
I have tried the workaround mentioned in the answer to that question, but it fails with the following error message.
Error: no EC2 Instance Type Offerings found matching criteria; try different search

  on main.tf line 8, in data "aws_ec2_instance_type_offering" "example":
   8: data "aws_ec2_instance_type_offering" "example" {
I am using the TF config below, where my preferred_instance_types is g4dn.xlarge.
provider "aws" {
version = "2.70"
}
data "aws_availability_zones" "all" {
state = "available"
}
data "aws_ec2_instance_type_offering" "example" {
for_each = toset(data.aws_availability_zones.all.names)
filter {
name = "instance-type"
values = ["g4dn.xlarge"]
}
filter {
name = "location"
values = [each.value]
}
location_type = "availability-zone"
preferred_instance_types = ["g4dn.xlarge"]
}
output "foo" {
value = { for az, details in data.aws_ec2_instance_type_offering.example : az => details.instance_type }
}
I would like to know how to handle this failure, since Terraform cannot find the G4 instance type in one of the AZs of a given Region and fails.
Is there any Terraform error handling I can use to bypass this error for now and get the supported AZs as an output?
I had checked the other question you mentioned earlier, but I could never get the output correctly. Thanks to #ydaetskcoR for the response in that post; I could learn a bit and get my loop working.
Here is one way to accomplish what you are looking for. Let me know if it works for you.
Instead of "aws_ec2_instance_type_offering", use "aws_ec2_instance_type_offerings" (note the 's' at the end; they are different data sources).
I will just paste the code here and assume you will be able to decode the logic. I am filtering for one specific instance type; if it's not supported, instance_types will be blank, and I make a list of the AZs that do not have blank values.
variable "az" {
default="us-east-1"
}
variable "my_inst" {
default="g4dn.xlarge"
}
data "aws_availability_zones" "example" {
filter {
name = "opt-in-status"
values = ["opt-in-not-required"]
}
}
data "aws_ec2_instance_type_offerings" "example" {
for_each=toset(data.aws_availability_zones.example.names)
filter {
name = "instance-type"
values = [var.my_inst]
}
filter {
name = "location"
values = ["${each.key}"]
}
location_type = "availability-zone"
}
output "az_where_inst_avail" {
value = keys({ for az, details in data.aws_ec2_instance_type_offerings.example :
az => details.instance_types if length(details.instance_types) != 0 })
}
The output will look like below. us-east-1e does not support the instance type, so it is not in the output. Do test a few cases to see if it works every time.
Outputs:

az_where_inst_avail = [
  "us-east-1a",
  "us-east-1b",
  "us-east-1c",
  "us-east-1d",
  "us-east-1f",
]
I think there's a cleaner way. The data source already filters by availability zone based on the given filter, and it has a locations attribute that produces a list of the desired location_type.
provider "aws" {
region = var.region
}
data "aws_ec2_instance_type_offerings" "available" {
filter {
name = "instance-type"
values = [var.instance_type]
}
location_type = "availability-zone"
}
output "azs" {
value = data.aws_ec2_instance_type_offerings.available.locations
}
Where the instance_type is t3.micro and region is us-east-1, this accurately produces:
azs = tolist([
  "us-east-1d",
  "us-east-1a",
  "us-east-1c",
  "us-east-1f",
  "us-east-1b",
])
You don't need to feed it a list of availability zones because it already gets those from the supplied region.
I'm getting the following error when trying to initially plan or apply a resource that uses data values from the AWS environment in a count.
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
Error: Invalid count argument
on main.tf line 24, in resource "aws_efs_mount_target" "target":
24: count = length(data.aws_subnet_ids.subnets.ids)
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
$ terraform --version
Terraform v0.12.9
+ provider.aws v2.30.0
I tried using the -target option, but it doesn't seem to work on data sources.
$ terraform apply -target aws_subnet_ids.subnets
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
The only solution I found that works is:

1. remove the resource
2. apply the project
3. add the resource back
4. apply again
Here is a terraform config I created for testing.
provider "aws" {
version = "~> 2.0"
}
locals {
project_id = "it_broke_like_3_collar_watch"
}
terraform {
required_version = ">= 0.12"
}
resource aws_default_vpc default {
}
data aws_subnet_ids subnets {
vpc_id = aws_default_vpc.default.id
}
resource aws_efs_file_system efs {
creation_token = local.project_id
encrypted = true
}
resource aws_efs_mount_target target {
depends_on = [ aws_efs_file_system.efs ]
count = length(data.aws_subnet_ids.subnets.ids)
file_system_id = aws_efs_file_system.efs.id
subnet_id = tolist(data.aws_subnet_ids.subnets.ids)[count.index]
}
Finally figured out the answer after researching the answer by Dude0001.
Short answer: use the aws_vpc data source with the default argument instead of the aws_default_vpc resource. Here is the working sample, with comments on the changes.
locals {
  project_id = "it_broke_like_3_collar_watch"
}

terraform {
  required_version = ">= 0.12"
}

// Delete this --> resource aws_default_vpc default {}

// Add this
data aws_vpc default {
  default = true
}

data "aws_subnet_ids" "subnets" {
  // Update this from aws_default_vpc.default.id
  vpc_id = data.aws_vpc.default.id
}

resource aws_efs_file_system efs {
  creation_token = local.project_id
  encrypted      = true
}

resource aws_efs_mount_target target {
  depends_on     = [aws_efs_file_system.efs]
  count          = length(data.aws_subnet_ids.subnets.ids)
  file_system_id = aws_efs_file_system.efs.id
  subnet_id      = tolist(data.aws_subnet_ids.subnets.ids)[count.index]
}
What I couldn't figure out was why my workaround of removing aws_efs_mount_target on the first apply worked. It's because after the first apply the aws_default_vpc was loaded into the state file.
So an alternate solution, without changing the original tf file, would be to use the target option on the first apply:
$ terraform apply --target aws_default_vpc.default
However, I don't like this, as it requires a special case on the first deployment, which is pretty unusual for the Terraform deployments I've worked with.
The aws_default_vpc isn't a resource TF can create or destroy. It is the default VPC for your account in each region, which AWS creates automatically for you and which is protected from being destroyed. You can only (and need to) adopt it into management and your TF state. This will allow you to begin managing it and to inspect it when you run plan or apply. Otherwise, TF doesn't know what the resource is or what state it is in, and it cannot create a new one for you, as it is a special type of protected resource as described above.
With that said, go get the default VPC ID from the correct region you are deploying to in your account, then import it into your TF state. Terraform should then be able to inspect it and count the subnets.
For example
terraform import aws_default_vpc.default vpc-xxxxxx
https://www.terraform.io/docs/providers/aws/r/default_vpc.html
Using the data element for this looks a little odd to me as well. Can you change your TF script to get the count directly through the aws_default_vpc resource?
I am trying to dynamically declare multiple aws_nat_gateway data sources by retrieving the list of public subnets through the aws_subnet_ids data source. However, when I try to set the count parameter to the length of the subnet ID list, I get an error saying The "count" value depends on resource attributes that cannot be determined until apply....
This is almost in direct contradiction to the example in their documentation! How do I fix this? Is the documentation wrong?
I am using Terraform v0.12.
data "aws_vpc" "environment_vpc" {
id = var.vpc_id
}
data "aws_subnet_ids" "public_subnet_ids" {
vpc_id = data.aws_vpc.environment_vpc.id
tags = {
Tier = "public"
}
depends_on = [data.aws_vpc.environment_vpc]
}
data "aws_nat_gateway" "nat_gateway" {
count = length(data.aws_subnet_ids.public_subnet_ids.ids) # <= Error
subnet_id = data.aws_subnet_ids.public_subnet_ids.ids.*[count.index]
depends_on = [data.aws_subnet_ids.public_subnet_ids]
}
I expect to be able to apply this template successfully, but I am getting the following error:
Error: Invalid count argument
on ../src/variables.tf line 78, in data "aws_nat_gateway" "nat_gateway":
78: count = "${length(data.aws_subnet_ids.public_subnet_ids.ids)}"
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
It seems you are trying to fetch subnets that haven't been created yet or that can't be determined at plan time. The Terraform output suggests adding the -target flag to create the VPC and subnets (or whatever other task they depend on) first; after that, you can apply the nat_gateway resource. I suggest you use an AZ list instead of subnet IDs; I'll add a simple example below.
variable "vpc_azs_list" {
default = [
"us-east-1d",
"us-east-1e"
]
}
resource "aws_nat_gateway" "nat" {
count = var.enable_nat_gateways ? length(var.azs_list) : 0
allocation_id = "xxxxxxxxx"
subnet_id = "xxxxxxxxx"
depends_on = [
aws_internet_gateway.main,
aws_eip.nat_eip,
]
tags = {
"Name" = "nat-gateway-name"
"costCenter" = "xxxxxxxxx"
"owner" = "xxxxxxxxx"
}
}
I hope this will be useful to you and other users.
I have the following deploy.tf file:
provider "aws" {
region = "us-east-1"
}
provider "aws" {
alias = "us_west_1"
region = "us-west-2"
}
resource "aws_us_east_1" "my_test" {
# provider = "aws.us_east_1"
count = 1
ami = "ami-0820..."
instance_type = "t2.micro"
}
resource "aws_us_west_1" "my_test" {
provider = "aws.us_west_1"
count = 1
ami = "ami-0d74..."
instance_type = "t2.micro"
}
I am trying to use it to deploy two servers, one in each region. I keep getting errors like:
aws_us_east_1.narc_test: Provider doesn't support resource: aws_us_east_1
I have tried setting aliases for both provider blocks and referring to the correct region in a number of different ways. I've read up on multi-region support, and some answers suggest this can be accomplished with modules; however, this is a simple test and I'd like to keep it simple. Is this currently possible?
Yes, Terraform can create resources in different regions, even within a single file. There is no need to use modules for your test scenario.
Your error is probably caused by a typo. If you want to launch an EC2 instance, the resource you want to create is aws_instance, not aws_us_west_1 or aws_us_east_1.
Sure enough, Terraform does not know this kind of resource, since it simply does not exist. Change it to aws_instance and you should be good to go! Additionally, once both resources are of the same type, you should name them differently to avoid a duplicate name from using my_test for both.
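A corrected version of the config could look like this (a sketch keeping the truncated AMI IDs from the question; the alias is renamed to match the region it actually points at):

provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "us_west_2"
  region = "us-west-2"
}

resource "aws_instance" "my_test_east" {
  # uses the default provider (us-east-1)
  ami           = "ami-0820..."
  instance_type = "t2.micro"
}

resource "aws_instance" "my_test_west" {
  provider      = aws.us_west_2
  ami           = "ami-0d74..."
  instance_type = "t2.micro"
}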
Step 1
Add a region alias in the main.tf file where you are going to execute terraform plan.
provider "aws" {
region = "eu-west-1"
alias = "main"
}
provider "aws" {
region = "us-east-1"
alias = "useast1"
}
Step 2
Add a providers block inside your module definition block.
module "lambda_edge_rule" {
providers = {
aws = aws.useast1
}
source = "../../../terraform_modules/lambda"
tags = var.tags
}
Step 3
Declare "aws" in required_providers inside your module (source = "../../../terraform_modules/lambda").
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.7.0"
    }
  }
}

resource "aws_lambda_function" "lambda" {
  function_name = "blablabla"
  # ...
}
Note: Terraform version v1.0.5 as of now.