Terraform: data source aws_instance doesn't work - amazon-web-services

I'm trying to work with the aws_instances data source. I created a simple configuration which should create an EC2 instance and return its IP as an output:
variable "default_port" {
  type    = string
  default = 8080
}

provider "aws" {
  region                  = "us-west-2"
  shared_credentials_file = "/Users/kharandziuk/.aws/creds"
  profile                 = "prototyper"
}

resource "aws_instance" "example" {
  ami           = "ami-0994c095691a46fb5"
  instance_type = "t2.small"

  tags = {
    name = "example"
  }
}

data "aws_instances" "test" {
  instance_tags = {
    name = "example"
  }

  instance_state_names = ["pending", "running", "shutting-down", "terminated", "stopping", "stopped"]
}

output "ip" {
  value = data.aws_instances.test.public_ips
}
But for some reason I can't configure the data source properly. The result is:
> terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
data.aws_instances.test: Refreshing state...
Error: Your query returned no results. Please change your search criteria and try again.
on main.tf line 21, in data "aws_instances" "test":
21: data "aws_instances" "test" {
how can I fix it?

You should use the depends_on option in data.aws_instances.test, like this:
data "aws_instances" "test" {
  instance_tags = {
    name = "example"
  }

  instance_state_names = ["pending", "running", "shutting-down", "terminated", "stopping", "stopped"]

  depends_on = [
    aws_instance.example
  ]
}
This tells Terraform to read data.aws_instances.test only after the resource aws_instance.example has been created. This option is sometimes necessary because of dependencies between AWS resources.
See the Terraform documentation on the depends_on meta-argument.

You don't need a data source here. You can get the public IP address of the instance back from the resource itself, simplifying everything.
This should do the exact same thing:
resource "aws_instance" "example" {
  ami           = "ami-0994c095691a46fb5"
  instance_type = "t2.small"

  tags = {
    name = "example"
  }
}

output "ip" {
  value = aws_instance.example.public_ip
}

Related

Unable to find remote state

Error: Unable to find remote state
on ../../modules/current_global/main.tf line 26, in data "terraform_remote_state" "global":
26: data "terraform_remote_state" "global" {
No stored state was found for the given workspace in the given backend
I have been stuck on this issue for a while.
main.tf:
data "aws_caller_identity" "current" {}

locals {
  state_buckets = {
    "amazon_account_id" = {
      bucket = "bucket_name"
      key    = "key"
      region = "region"
    }
  }

  state_bucket = local.state_buckets[data.aws_caller_identity.current.account_id]
}

data "terraform_remote_state" "global" {
  backend = "s3"
  config  = local.state_bucket
}

output "outputs" {
  description = "Current account's global Terraform module outputs"
  value       = data.terraform_remote_state.global.outputs
}
One directory above, there is another main.tf file which references the main.tf above.
main.tf:
provider "aws" {
  version             = "~> 2.0"
  region              = var.region
  allowed_account_ids = ["id"]
}

terraform {
  backend "s3" {
    bucket = "bucket_name"
    key    = "key"
    region = "region"
  }
}

module "global" {
  source = "../../modules/current_global"
}

Override a module's local.tf variable in Terraform

I want to override the value of root_volume_type to gp2 in https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/local.tf
The main.tf below is the only file I created in my Terraform code. I want to override this in the code, not set it via the command line while running terraform apply:
module "eks_example_basic" {
  source  = "terraform-aws-modules/eks/aws//examples/basic"
  version = "14.0.0"
  region  = "us-east-1"
}
The error is correct because you are sourcing an example, which does not support variables such as workers_group_defaults. You can't override it unless you fork the example and modify it yourself.
workers_group_defaults is supported in the core module, for instance:
data "aws_vpc" "default" {
  default = true
}

data "aws_subnet_ids" "default" {
  vpc_id = data.aws_vpc.default.id
}

module "eks_example" {
  source  = "terraform-aws-modules/eks/aws"
  version = "14.0.0"

  cluster_name    = "SomeEKSCluster"
  cluster_version = "1.18"
  subnets         = data.aws_subnet_ids.default.ids
  vpc_id          = data.aws_vpc.default.id

  workers_group_defaults = { root_volume_type = "gp2" }
}

Applying tags to instances created with for_each in Terraform

I have multiple EC2 instances created using for each. Each instance is being deployed into a different subnet. I am getting an error when trying to apply tags to each instance being deployed. Any advice would be helpful. Below is the code for my tags and instances:
resource "aws_instance" "private" {
  for_each      = aws_subnet.private
  ami           = var.ec2_amis[var.region]
  instance_type = var.tableau_instance
  key_name      = aws_key_pair.tableau.key_name
  subnet_id     = each.value.id

  tags = {
    Name = var.ec2_tags[each.key]
  }
}

variable "ec2_tags" {
  type = list(string)
  default = [
    "PrimaryEC2",
    "EC2Worker1",
    "EC2Worker2"
  ]
}
Error
Error: Invalid index
on vpc.tf line 21, in resource "aws_instance" "private":
21: Name = var.ec2_tags[each.key]
|----------------
| each.key is "3"
| var.ec2_tags is list of string with 3 elements
The given key does not identify an element in this collection value.
I had this code working earlier, not sure what happened. I made a change to the AMI it spins up, but I don't see why that could have an effect on tags. Any advice would be helpful.
UPDATE
I have updated the resource with the following locals block and dynamic block within my "aws_instance" "private" code:
locals {
  private_instance = [
    {
      name = "PrimaryEC2"
    },
    {
      name = "EC2Worker1"
    },
    {
      name = "EC2Worker2"
    }
  ]
}

dynamic "tags" {
  for_each = local.private_instance
  content {
    Name = tags.value.name
  }
}
Error
Error: Unsupported block type
on vpc.tf line 28, in resource "aws_instance" "private":
28: dynamic "tags" {
Blocks of type "tags" are not expected here.
Any advice how to fix would help. Thanks!
If you want to make your tags dynamic, you could create them as follows:
tags = {
  Name = each.key == "0" ? "PrimaryEC2" : "EC2Worker${each.key}"
}
You would use it as follows (assuming everything else is OK):
resource "aws_instance" "private" {
  for_each      = aws_subnet.private
  ami           = var.ec2_amis[var.region]
  instance_type = var.tableau_instance
  key_name      = aws_key_pair.tableau.key_name
  subnet_id     = each.value.id

  tags = {
    Name = each.key == "0" ? "PrimaryEC2" : "EC2Worker${each.key}"
  }
}
The code uses a conditional expression. It works as follows: if each.key is equal to "0" (i.e., the first instance being created), then its tag will be "PrimaryEC2". All remaining instances will be tagged "EC2Worker1", "EC2Worker2", "EC2Worker3", and so on, for as many subnets as there are.
One possible cause of this error is that aws_subnet.private is longer than the list of EC2 tags, which results in an error when index 3 is used on your ec2_tags list, looking for a fourth (nonexistent) element.
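A way to avoid that index mismatch entirely is to drive the for_each from a map whose keys match the subnet keys, so each instance always finds its tag. This is only a sketch under the assumption that aws_subnet.private is also keyed by the same names (the key names below are hypothetical):
```hcl
# Hypothetical sketch: key the tags by the same names as the subnets,
# so each.key always has a matching entry and no index lookup is needed.
variable "ec2_tags" {
  type = map(string)
  default = {
    primary = "PrimaryEC2"
    worker1 = "EC2Worker1"
    worker2 = "EC2Worker2"
  }
}

resource "aws_instance" "private" {
  for_each      = var.ec2_tags                     # same keys as the subnets
  ami           = var.ec2_amis[var.region]
  instance_type = var.tableau_instance
  subnet_id     = aws_subnet.private[each.key].id  # assumes matching subnet keys

  tags = {
    Name = each.value   # looked up by key, never by numeric index
  }
}
```
With this shape, adding a fourth subnet without a fourth tag is caught as a missing map key at the subnet lookup, rather than silently producing an out-of-range index on a list.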

Unable to read from terraform.tfstate while using modules

I am using Terraform v0.12.6. I am using modules to create a VPC,Subnets and EC2 instances.
root.tf
vpc.tf
pub_subnet.tf
web_server.tf
vpc.tf and pub_subnet.tf are working fine and displaying the required output. However, I am unable to use the subnet_id from the module pub_subnet.tf as input to my web_server.tf.
The reason is that it is a list, and I am getting Inappropriate value for attribute "subnet_id": string required.
Looks like I have to read the terraform.tfstate file.
Here is my present code -
root.tf
provider "aws" {
  region = "us-east-1"
}

data "terraform_remote_state" "public_subnet" {
  backend = "local"
  config = {
    path = "terraform.tfstate"
  }
}

module "my_vpc" {
  source   = "../modules/vpc_flowlogs"
  vpc_cidr = "10.0.0.0/16"
  # vpc_id = "${module.my_vpc.vpc_id}"
}

module "vpc_igw" {
  source = "../modules/vpc_igw"
  vpc_id = "${module.my_vpc.vpc_id}"
}

module "public_subnets" {
  source = "../modules/pub_subnets"
  vpc_id = "${module.my_vpc.vpc_id}"
}

module "web_servers" {
  source    = "../modules/webservers"
  vpc_id    = "${module.my_vpc.vpc_id}"
  subnet_id = "${data.terraform_remote_state.public_subnet.outputs.subnet_id[0]}"
}
web_servers.tf
resource "aws_instance" "web-srvs" {
  count                       = "${var.instance_count == "0" ? "1" : var.instance_count}"
  ami                         = "ami-035b3c7efe6d061d5"
  instance_type               = "t2.nano"
  key_name                    = "xxx-dev"
  subnet_id                   = "${var.subnet_id}"
  vpc_security_group_ids      = ["${aws_security_group.pub_sg.id}"]
  associate_public_ip_address = true
}
I am trying to use one of the two subnet_ids created.
I have tried different ways but am now running out of ideas.
Just as an FYI, my tfstate file is located in the same directory as root.tf.
I appreciate any help. Or is this a bug?
You're requesting remote state for no reason: remote state is for referencing outputs from other configurations. Since you are using modules, you should change this to reference the module directly, but you will have to output the values from the module so you can reference them elsewhere.
  subnet_id = "${data.terraform_remote_state.public_subnet.outputs.subnet_id[0]}"
}
Should be
  subnet_id = "${module.public_subnets.subnet}"
}
In your subnet module, create an output:
output "subnet" {
  value = "${aws_subnet.some_subnet.id}"
}

How to prevent a cyclic dependency when creating a signed cert for an EC2 instance?

I'm using Terraform to create an EC2 instance which will be used as a Docker host. This means I need to create encryption keys to securely connect to it over the internet. When creating the keys you need to specify the IP address and hostnames you will be connecting with. In Terraform these values can be allocated dynamically, but this easily results in a cyclic dependency. Let's use an example:
resource "tls_private_key" "example" {
  algorithm = "ECDSA"
}

resource "tls_self_signed_cert" "docker_host_key" {
  key_algorithm         = "${tls_private_key.example.algorithm}"
  private_key_pem       = "${tls_private_key.example.private_key_pem}"
  validity_period_hours = 12
  early_renewal_hours   = 3
  allowed_uses          = ["server_auth"]
  dns_names             = ["${aws_instance.example.public_dns}"]
  ip_addresses          = ["${aws_instance.example.public_ip}"]

  subject {
    common_name  = "example.com"
    organization = "example"
  }
}

resource "aws_instance" "example" {
  count                       = 1
  ami                         = "ami-d05e75b8"
  instance_type               = "t2.micro"
  subnet_id                   = "subnet-24h4fos9"
  associate_public_ip_address = true

  provisioner "remote-exec" {
    inline = [
      "echo \"${tls_self_signed_cert.docker_host_key.private_key_pem}\" > private_key_pem",
      "echo \"${tls_self_signed_cert.docker_host_key.cert_pem}\" > cert_pem",
      "echo \"${tls_private_key.example.private_key_pem}\" > private_key_pem2",
    ]
  }
}
In the remote-exec provisioner we need to write values from the tls_self_signed_cert resource, which in turn needs values from the aws_instance resource.
How can I overcome this situation?
You can use an aws_eip resource to create an Elastic IP and attach it to the instance with aws_eip_association.
resource "aws_eip" "eip" {
  ...
}

resource "aws_eip_association" "eip" {
  allocation_id = "${aws_eip.eip.id}"
  instance_id   = "${aws_instance.example.id}"
}

resource "tls_self_signed_cert" "docker_host_key" {
  # set something here from Route53 instead: dns_names = [ "${aws_instance.example.public_dns}" ]
  ip_addresses = ["${aws_eip.eip.public_ip}"]
  ...
}