I am trying to get the vpc_id of the default VPC in my AWS account using Terraform.
This is what I tried, but it gives an error:
Error: Invalid data source
data "aws_default_vpc" "default" {
}
# vpc
resource "aws_vpc" "kubernetes-vpc" {
cidr_block = "${var.vpc_cidr_block}"
enable_dns_hostnames = true
tags = {
Name = "kubernetes-vpc"
}
}
The aws_default_vpc is indeed not a valid data source, but the aws_vpc data source does have a boolean default argument you can use to select the default VPC:
data "aws_vpc" "default" {
default = true
}
For completeness, I'll add that an aws_default_vpc resource also exists; it manages the default VPC, implementing the resource lifecycle without actually creating the VPC*, but it will make changes to the resource, such as updating its tags (and that includes its Name).
* Unless you forcefully destroy the default VPC
From the docs:
This is an advanced resource and has special caveats to be aware of when using it. Please read this document in its entirety before using this resource.
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/default_vpc
This
resource "aws_default_vpc" "default" {
}
will do.
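Because the resource adopts the existing default VPC rather than creating a new one, you can also use it to manage attributes on it. A small sketch, with an example tag value:

resource "aws_default_vpc" "default" {
  tags = {
    Name = "Default VPC" # example tag, applied to the existing default VPC
  }
}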
I think this is convenient for Terraform projects managing a whole AWS account, but I would advise against using it when multiple Terraform projects are deployed in a single organization account. In that case you are better off staying with #blokje5's answer.
Overview
Currently, dashboards are being deployed via Terraform using values from a dictionary in locals.tf:
resource "aws_cloudwatch_dashboard" "my_alb" {
for_each = local.env_mapping[var.env]
dashboard_name = "${each.key}_alb_web_operational"
dashboard_body = templatefile("templates/alb_ops.tpl", {
environment = each.value.env,
account = each.value.account,
region = each.value.region,
alb = each.value.alb
tg = each.value.alb_tg
}
This is fragile because the values for AWS infrastructure resources like the ALB and the ALB target group are hard-coded, and those resources are sometimes destroyed and recreated when updates are applied.
Question
What's the best approach to get these values dynamically? For example, this could be achieved by writing a Python/Boto3 Lambda, which looks up these values and then passes them to Terraform as env variables. Are there any other recommended ways to achieve the same?
It depends on how dynamic your environment is, but it sounds like Terraform data sources are what you are looking for.
Usually, load balancer names are fixed or generated by some rule, so they should be known before the dashboard is created.
Let's suppose the names are fixed:
variable "loadbalancers" {
type = object
default = {
alb01 = "alb01",
alb02 = "alb02"
}
}
In this case the load balancers can be looked up with:
data "aws_lb" "albs" {
for_each = var.loadbalancers
name = each.value # or each.key
}
And after that you will be able to access the dynamically generated attributes:
data.aws_lb.albs["alb01"].id
data.aws_lb.albs["alb01"].arn
etc.
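Tying this back to the original dashboard resource, a sketch of wiring the looked-up values into the template (assuming the template only needs the ALB's ARN suffix; adapt the keys to whatever your template actually expects):

resource "aws_cloudwatch_dashboard" "my_alb" {
  for_each       = data.aws_lb.albs
  dashboard_name = "${each.key}_alb_web_operational"
  dashboard_body = templatefile("templates/alb_ops.tpl", {
    alb = each.value.arn_suffix # CloudWatch metrics reference ALBs by ARN suffix
  })
}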
If the load balancer names are generated by some rule, you should use the AWS CLI or AWS CDK to fetch all the names, or simply generate the names by the same rule used inside the AWS environment and pass them in as a Terraform variable.
Note: terraform plan (and apply, destroy) will raise an error if you pass a non-existent name, so you should check that an LB with the provided name exists.
I work in an environment that is generally the same; however, sometimes I have a customer who wants to use a different DHCP options set from the standard we build and associate via Terraform. It isn't feasible to use a custom Terraform resource due to the sheer number of customers we work with and how we roll out changes via Terraform.
My problem is that the aws_vpc_dhcp_options_association resource is completely deleted when the association is changed manually by the customer, so I haven't been able to use the ignore_changes lifecycle block. What I need Terraform to do is create the association if one doesn't exist at all (e.g. when the AWS account is created) and ignore whatever happens to the association after that.
resource "aws_vpc_dhcp_options" "this" {
domain_name = "domain.com"
domain_name_servers = ["AmazonProvidedDNS"]
ntp_servers = ["1.2.3.4", "1.2.3.4"]
netbios_node_type =
tags = {
Name = "name"
}
}
resource "aws_vpc_dhcp_options_association" "assocation" {
vpc_id = vpc-id
dhcp_options_id = aws_vpc_dhcp_options.this.id
}
I've tried ignore_changes and preconditions.
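For reference, the ignore_changes attempt presumably looked something like this (a sketch; it does not help here because the association is deleted outside Terraform rather than modified in place):

resource "aws_vpc_dhcp_options_association" "assocation" {
  vpc_id          = "vpc-id" # placeholder, as above
  dhcp_options_id = aws_vpc_dhcp_options.this.id

  lifecycle {
    ignore_changes = [dhcp_options_id]
  }
}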
I'm brand new to Terraform so I'm sure I'm missing something, but the answers I'm finding don't seem to be asking the same question I have.
I have an AWS VPC/security group that our EC2 instances need to be created under, and this VPC/SG is already created. To create an EC2 instance, Terraform requires that if I don't have a default VPC, I must import my own. But once I import and apply my plan, when I wish to destroy it, it tries to destroy my VPC as well. How do I encapsulate my resources so that when I run terraform apply I can create an EC2 instance with my imported VPC, but when I run terraform destroy I only destroy my EC2 instance?
In case anyone wants to mention, I understand that:
lifecycle {
  prevent_destroy = true
}
is not what I'm looking for.
Here is my current practice code.
resource "aws_vpc" "my_vpc" {
cidr_block = "xx.xx.xx.xx/24"
}
provider "aws" {
region = "us-west-2"
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_instance" "web" {
ami = "${data.aws_ami.ubuntu.id}"
instance_type = "t3.nano"
vpc_security_group_ids = ["sg-0e27d851dxxxxxxxxxx"]
subnet_id = "subnet-0755c2exxxxxxxx"
tags = {
Name = "HelloWorld"
}
}
Terraform should not require you to deploy or import a VPC in order to deploy an EC2 instance into it. You should be able to reference the VPC, subnets and security groups by ID, so Terraform is aware of your existing network infrastructure, just as you've already done for the SG and subnet. All you should need to deploy the aws_instance is an existing subnet ID in the existing VPC, like you already provided. Why do you say deploying or importing a VPC is required by Terraform? What error or issue do you get when deploying without the VPC and just using the existing one?
You can protect the VPC through AWS if you really want to, but I don't think you really want to import the VPC into your Terraform state and let Terraform manage it here. It sounds like you want the VPC to serve other resources, perhaps applications deployed manually or through other Terraform stacks, and to live independently of any one application deployment.
To answer the original question, you can use a data source and match your VPC by ID or tag name:
data "aws_vpc" "main" {
tags = {
Name = "main_vpc"
}
}
Or
data "aws_vpc" "main" {
id = "vpc-nnnnnnnn"
}
Then refer to it with data.aws_vpc.main.
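For instance, a sketch of creating a security group inside that VPC (the group name here is hypothetical):

resource "aws_security_group" "web" {
  name   = "web-sg" # hypothetical name
  vpc_id = data.aws_vpc.main.id
}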
Also, if you have already imported your VPC but would like to remove it from your state without destroying it, you can do that with the terraform state command: https://www.terraform.io/docs/commands/state/index.html
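For example, terraform state rm stops tracking a resource without touching the real infrastructure:

terraform state rm aws_vpc.my_vpc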
Google recommends deleting the default VPC and creating your own for prod.
This resource manages the default VPC: https://www.terraform.io/docs/providers/aws/r/default_vpc.html
But I want to set a different VPC to be the default and delete the auto created one.
How is this possible?
You can avoid/skip the default network creation by setting an Organization Policy Constraint.
gcloud resource-manager org-policies enable-enforce \
    constraints/compute.skipDefaultNetworkCreation \
    --organization ORGANIZATION_ID
More details in Organization Policy Constraints and Using boolean constraints in organization policy.
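If you want to confirm the policy took effect, you can inspect it with the describe subcommand (same ORGANIZATION_ID placeholder as above):

gcloud resource-manager org-policies describe \
    constraints/compute.skipDefaultNetworkCreation \
    --organization ORGANIZATION_ID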
The default network does not have any specific configuration that makes it the default network. It is just the one network that is always created together with a new project, and whenever a network is not specified (for instance, when deploying a GAE flex application), the network named default is used. When you create a project with Terraform, you can specify auto_create_network = "false".
However, this will not prevent the creation of the default network; it will just delete it before the project is fully created. This means that, during project creation with Terraform, it is not possible to create another network called default; that must be done after the original default network has been removed, hence after the project creation.
You can try creating projects with Terraform using this tutorial.
The next snippet is part of the tutorial, in which I included the line to delete the default network on project creation.
variable "project_name" {}
variable "billing_account" {}
variable "org_id" {}
variable "region" {}
provider "google" {
region = "${var.region}"
}
resource "random_id" "id" {
byte_length = 4
prefix = "${var.project_name}-"
}
resource "google_project" "project" {
name = "${var.project_name}"
project_id = "${random_id.id.hex}"
billing_account = "${var.billing_account}"
org_id = "${var.org_id}"
auto_create_network = "false" //This is supposed to delete default network on project creation
}
resource "google_project_services" "project" {
project = "${google_project.project.project_id}"
services = [
"compute.googleapis.com"
]
}
output "project_id" {
value = "${google_project.project.project_id}"
}
Nonetheless, I have tried it myself and the default network was still there.
Since in Terraform you describe the desired state of your configuration, it is not possible to implicitly send a "destroy request" to a resource that is not managed by Terraform.
However, you could try importing it first; it will then be managed by Terraform, and since you do not include it in your *.tf files, the default network should be deleted during the terraform apply step.
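A sketch of that flow (PROJECT_ID is a placeholder; note that newer Terraform versions refuse to import a resource without a matching resource block in the configuration, so this relies on the older behaviour):

terraform import google_compute_network.default PROJECT_ID/default
terraform apply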
Setting the property auto_create_network = "false" and specifying a billing account ID when creating a GCP project, as in the code snippet below, ensures that the default network gets deleted.
resource "google_project" "project" {
name = "test"
project_id = "test-523"
billing_account = "xxxxx"
auto_create_network = "false"
}
I am facing an issue in Terraform where I want to read the details of an existing resource (r1) created via the AWS web console.
I am using those details in the creation of a new resource (r2) via Terraform.
The problem is that Terraform is trying to destroy and recreate r1, which is not desired, as the operation will fail. How can I avoid destroying and recreating r1 when I run terraform apply?
Here is how I am doing it:
main.tf
resource "aws_lb" "r1"{
}
...
resource "aws_api_gateway_integration" "r2" {
type = "HTTP"
uri = "${aws_lb.r1.dns_name}}/o/v1/multi/get/m/content"
}
First I import that resource:
terraform import aws_lb.r1 {my_arn}
Next I apply:
terraform apply
The error:
aws_lb.r1: Error deleting LB: ResourceInUse: Load balancer 'my_arn' cannot be deleted because it is currently associated with another service
The import command is meant for taking control of existing resources in your Terraform setup.
If your only intention is to read information from existing resources (outside of your Terraform control), data sources are designed specifically for this need:
data "aws_lb" "r1" {
name = "lb_foo"
arn = "some_specific_arn" #you can use any selector you wish to query the correct LB
}
resource "aws_api_gateway_integration" "r2" {
type = "HTTP"
uri = "${data.aws_lb.r1.dns_name}/o/v1/multi/get/m/content"
}
You can add a lifecycle configuration block in the resource "aws_lb" "r1" (see: https://www.terraform.io/docs/configuration/resources.html#lifecycle) to tell Terraform to ignore changes in the resource.
I guess something like this should work:
resource "aws_lb" "r1"{
lifecycle {
ignore_changes = ["*"]
}
}