I'm trying to learn to use Terraform (v0.3.7) with Amazon Web Services.
When I create a VPC using Terraform via the following:
resource "aws_vpc" "test-vpc" {
  cidr_block           = "${var.vpc_cidr}"
  enable_dns_hostnames = true
  tags {
    Name = "test-vpc"
  }
}
The VPC will have a main routing table and a "default" security group automatically created (I assume by AWS, rather than Terraform); these can be identified by the attributes on the created VPC: main_route_table_id and default_security_group_id.
While following this tutorial it talks about creating your own default security group and routing table - it makes no mention of the default ones that will get created (even if you create your own routing table, the "main" one created by default will just remain sitting there, associated with no subnets or anything).
Shouldn't we be using the default resources that are created with a VPC? Especially for the routing table: are there any side effects of not using the "main" routing table?
And if I should be using the default resources, how do I do that with Terraform?
I couldn't see anything in the Terraform documentation about these default resources, and if I try to override them (for example, by telling Terraform to create a security group named default), I get errors.
AWS creates these default route tables and security groups automatically. If you don't use them (I know we don't), they are fine to delete.
Terraform throws errors if you ask it to create the default security group, probably because the group already exists or because that security group name is reserved.
You can create a new resource "aws_security_group" ( https://terraform.io/docs/providers/aws/r/security_group.html ) and assign it to the instance with "security_groups". That reference is an implicit dependency, so the security group will be created before the instance; if you need to make the ordering explicit, you can also declare it with
depends_on = ["aws_security_group.sg-name-from-resource"]
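As a sketch of that approach (resource names, the AMI ID, and the attribute names follow current AWS provider docs and are placeholders, not taken from the question; very old Terraform versions may differ):

```hcl
# Hypothetical security group allowing inbound SSH, attached to the VPC above.
resource "aws_security_group" "web" {
  name        = "web-sg"
  description = "Allow inbound SSH"
  vpc_id      = "${aws_vpc.test-vpc.id}"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "web" {
  ami                    = "ami-xxxxxxxx" # placeholder
  instance_type          = "t2.micro"
  vpc_security_group_ids = ["${aws_security_group.web.id}"]
}
```

Because the instance interpolates the security group's ID, Terraform orders their creation correctly without an explicit depends_on.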
Related
I'm quite new to Terraform, and struggling with something.
I'm playing around with Redshift for a personal project, and I want to update the inbound security rules for the default security group which is applied to Redshift when it's created.
If I were doing it in AWS Console, I'd be adding a new inbound rule with Type being All Traffic and Source being Anywhere -IPv4 which adds 0.0.0.0/0.
Below in main.tf I've tried to create a new security group and apply that to Redshift, but I get a VPC-by-Default customers cannot use cluster security groups error.
What is it I'm doing wrong?
resource "aws_redshift_cluster" "redshift" {
  cluster_identifier      = "redshift-cluster-pipeline"
  skip_final_snapshot     = true
  master_username         = "awsuser"
  master_password         = var.db_password
  node_type               = "dc2.large"
  cluster_type            = "single-node"
  publicly_accessible     = "true"
  iam_roles               = [aws_iam_role.redshift_role.arn]
  cluster_security_groups = [aws_redshift_security_group.redshift-sg.name]
}

resource "aws_redshift_security_group" "redshift-sg" {
  name = "redshift-sg"
  ingress {
    cidr = "0.0.0.0/0"
  }
}
The documentation for the Terraform resource aws_redshift_security_group states:
Creates a new Amazon Redshift security group. You use security groups
to control access to non-VPC clusters
The error message you are receiving is clearly stating that you are using the wrong type of security group, and you need to use a VPC security group instead. Once you create the appropriate VPC security group, you would set it on the aws_redshift_cluster resource via the vpc_security_group_ids property.
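A minimal sketch of that fix (the VPC reference and the wide-open CIDR are assumptions for illustration, mirroring the rule the question wanted):

```hcl
# Hypothetical VPC security group opening Redshift's default port 5439 to all IPv4.
resource "aws_security_group" "redshift_sg" {
  name   = "redshift-sg"
  vpc_id = aws_vpc.main.id # assumes a VPC named "main" exists in your config

  ingress {
    from_port   = 5439
    to_port     = 5439
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_redshift_cluster" "redshift" {
  cluster_identifier     = "redshift-cluster-pipeline"
  node_type              = "dc2.large"
  cluster_type           = "single-node"
  master_username        = "awsuser"
  master_password        = var.db_password
  skip_final_snapshot    = true
  vpc_security_group_ids = [aws_security_group.redshift_sg.id]
}
```

Note the cluster now uses vpc_security_group_ids rather than cluster_security_groups, which is what the error was objecting to.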
I want to know whether it is possible in Terraform to get the default VPC ID without writing the ID or the name in the manifest, and to save it in a variable. My idea is to look it up knowing only that it is the default VPC, without ever specifying its ID or name.
This will get you a reference to the default VPC in the current AWS region:
data "aws_vpc" "default" {
  default = true
}
This is documented here.
Note that this gives you a reference to the VPC, so you can pass the ID to other resources. Terraform does not manage the VPC when you do this, it simply references it. This is different from terraform import which causes Terraform to start managing the VPC, and requires that you pass it the VPC ID.
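For example, the looked-up ID can feed other resources (the subnet and its CIDR below are purely illustrative, not from the question):

```hcl
data "aws_vpc" "default" {
  default = true
}

# Hypothetical subnet created inside the default VPC.
resource "aws_subnet" "example" {
  vpc_id     = data.aws_vpc.default.id
  cidr_block = "172.31.96.0/24" # assumed to be free within the default VPC's range
}
```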
I am trying to get the vpc_id of default vpc in my aws account using terraform
This is what I tried, but it gives an error:
Error: Invalid data source
data "aws_default_vpc" "default" {
}
# vpc
resource "aws_vpc" "kubernetes-vpc" {
  cidr_block           = "${var.vpc_cidr_block}"
  enable_dns_hostnames = true
  tags = {
    Name = "kubernetes-vpc"
  }
}
The aws_default_vpc is indeed not a valid data source. But the aws_vpc data source does have a boolean default argument you can use to select the default VPC:
data "aws_vpc" "default" {
  default = true
}
For completeness, I'll add that an aws_default_vpc resource also exists. It manages the default VPC, implementing the resource life-cycle without actually creating the VPC*, but it will make changes to the resource, such as updating tags (including its Name).
* Unless you forcefully destroy the default VPC
From the docs:
This is an advanced resource and has special caveats to be aware of when using it. Please read this document in its entirety before using this resource.
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/default_vpc
This
resource "aws_default_vpc" "default" {
}
will do.
I think this is convenient for Terraform projects that manage a whole AWS account, but I would advise against using it when multiple Terraform projects are deployed in a single organization account. In that case, you are better off with #blokje5's answer.
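As a sketch of what adopting the default VPC looks like (the tag value is just an example, not required):

```hcl
# Adopts the existing default VPC into Terraform state; destroying this
# resource does not delete the VPC unless you forcefully destroy it.
resource "aws_default_vpc" "default" {
  tags = {
    Name = "default-vpc-managed-by-terraform" # example tag
  }
}
```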
I'm brand new to Terraform so I'm sure i'm missing something, but the answers i'm finding don't seem to be asking the same question I have.
I have an existing AWS VPC and security group that our EC2 instances need to be created under. To create an EC2 instance, Terraform seems to require that if I don't have a default VPC, I must import my own. But once I import it and apply my plan, running terraform destroy tries to destroy my VPC as well. How do I encapsulate my resources so that "terraform apply" creates an EC2 instance using my imported VPC, but "terraform destroy" only destroys the EC2 instance?
In case anyone wants to mention, I understand that:
lifecycle {
  prevent_destroy = true
}
is not what I'm looking for.
Here is my current practice code.
resource "aws_vpc" "my_vpc" {
  cidr_block = "xx.xx.xx.xx/24"
}

provider "aws" {
  region = "us-west-2"
}

data "aws_ami" "ubuntu" {
  most_recent = true
  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
  }
  owners = ["099720109477"] # Canonical
}

resource "aws_instance" "web" {
  ami                    = "${data.aws_ami.ubuntu.id}"
  instance_type          = "t3.nano"
  vpc_security_group_ids = ["sg-0e27d851dxxxxxxxxxx"]
  subnet_id              = "subnet-0755c2exxxxxxxx"
  tags = {
    Name = "HelloWorld"
  }
}
Terraform should not require you to deploy or import a VPC in order to deploy an EC2 instance into it. You should be able to reference the VPC, subnets, and security groups by ID, so Terraform is aware of your existing network infrastructure, just as you've already done for the SG and subnet. To deploy the EC2 instance ("aws_instance"), all you should need is an existing subnet ID in the existing VPC, which you already have. Why do you say Terraform requires deploying or importing a VPC? What error or issue do you get when deploying without the VPC resource and just using the existing one?
You can protect the VPC through AWS if you really want to, but I don't think you want to import the VPC into your Terraform state and let Terraform manage it here. It sounds like you want the VPC to serve other resources, perhaps applications deployed manually or through other TF stacks, and to live independently of any one application deployment.
To answer the original question, you can use a data source and match your VPC by ID or tag name:
data "aws_vpc" "main" {
  tags = {
    Name = "main_vpc"
  }
}

Or

data "aws_vpc" "main" {
  id = "vpc-nnnnnnnn"
}
Then refer to it with data.aws_vpc.main.
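For instance, the looked-up VPC can be referenced by another resource (the security group here is purely illustrative, not from the question):

```hcl
data "aws_vpc" "main" {
  tags = {
    Name = "main_vpc"
  }
}

# Hypothetical security group placed in the existing, externally managed VPC.
resource "aws_security_group" "app" {
  name   = "app-sg"
  vpc_id = data.aws_vpc.main.id
}
```

Destroying this configuration removes only the security group; the VPC is never managed by Terraform.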
Also, if you have already included your VPC but would like to remove it from your state without destroying it, you can do so with the terraform state command: https://www.terraform.io/docs/commands/state/index.html
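For example, assuming the VPC lives in state under the address aws_vpc.my_vpc (as in the question's code), removing it from state without touching the real VPC would look something like:

```shell
# Remove the VPC from Terraform state only; the actual AWS VPC is untouched.
terraform state rm aws_vpc.my_vpc

# Subsequent plans will no longer track or try to destroy the VPC.
terraform plan
```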
I need two different Terraform files for different purposes. In the second Terraform file, I have to take input from the output of the first Terraform file.
In my scenario, my first Terraform file creates an AWS security group, and I have to use the ID of that security group in my second Terraform file.
I also want to make sure that the second Terraform run cannot initialize before the first one completes. How can I achieve this?
It doesn't matter how many .tf files you create. Terraform first loads all the .tf files and then builds a graph to create the resources. So you can do it like this:
resource "aws_security_group" "default" {
  name        = "allow_all"
  description = "Allow all inbound traffic"
  # ...
}
Now you can use the ID of this security group in another file or module. For example, let's use it for EC2 creation:
resource "aws_instance" "web" {
  ami             = "${var.ami_id}"
  instance_type   = "t2.micro"
  security_groups = ["${aws_security_group.default.id}"]
}
For more details about security group parameters, see the following document:
https://www.terraform.io/docs/providers/aws/r/security_group.html
For this requirement you might want to use Terraform modules, which give you code reuse and let you supply the security group ID to as many Terraform files as you want. Whenever you run terraform apply, it checks resource dependencies and executes accordingly.
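A minimal sketch of that module approach (the module path, output name, and variable names are all hypothetical):

```hcl
# modules/security-group/outputs.tf (hypothetical module layout)
# Exposes the group's ID so callers can consume it.
output "sg_id" {
  value = "${aws_security_group.default.id}"
}

# Root configuration consuming the module's output.
module "security_group" {
  source = "./modules/security-group"
}

resource "aws_instance" "web" {
  ami             = "${var.ami_id}"
  instance_type   = "t2.micro"
  security_groups = ["${module.security_group.sg_id}"]
}
```

Because the instance references the module output, Terraform's dependency graph guarantees the security group is created first, which also answers the ordering concern in the question.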