How to control Terraform execution order - amazon-web-services

After the RDS and ElastiCache resources are created by Terraform, I would like the EC2 instance to be set up only afterwards.
Is this feasible with Terraform?
To be precise, I am running Docker on EC2, and I would like to pass the endpoints of the ElastiCache and RDS resources created by Terraform to Docker as environment variables.
Thank you for reading my question.

It is feasible with Terraform's implicit and explicit dependencies: they let you define which resource should be created first and which afterwards.
Explicit ordering uses the depends_on meta-argument, which takes a list of resources:
depends_on = [
  "", "",
]
Here is an example:
resource "aws_db_instance" "rds_example" {
allocated_storage = 10
storage_type = "gp2"
engine = "mysql"
engine_version = "5.6.17"
instance_class = "db.t1.micro"
name = "mydb"
username = "foo"
password = "bar"
db_subnet_group_name = "my_database_subnet_group"
parameter_group_name = "default.mysql5.6"
}
resource "aws_instance" "ec2_example" {
ami = "${data.aws_ami.ubuntu.id}"
instance_type = "t2.micro"
tags {
Name = "HelloWorld"
}
depends_on = [
"aws_db_instance.rds_example",
]
}
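For the question's actual goal of passing the endpoints to Docker as environment variables, the explicit depends_on is usually unnecessary: referencing the endpoint attributes of the RDS and ElastiCache resources creates implicit dependencies, so Terraform builds them first anyway. A minimal sketch along those lines; the ElastiCache cluster, the Docker image name and the user_data contents are illustrative assumptions, not taken from the question:
resource "aws_elasticache_cluster" "cache_example" {
  cluster_id      = "example-cache"
  engine          = "redis"
  node_type       = "cache.t2.micro"
  num_cache_nodes = 1
}
resource "aws_instance" "docker_host" {
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "t2.micro"

  # Referencing the endpoints below creates implicit dependencies, so the RDS
  # instance and the ElastiCache cluster are created before this EC2 instance.
  user_data = <<-EOF
    #!/bin/bash
    docker run -d \
      -e DB_ENDPOINT="${aws_db_instance.rds_example.endpoint}" \
      -e CACHE_ENDPOINT="${aws_elasticache_cluster.cache_example.cache_nodes.0.address}" \
      my-app-image
  EOF
}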

Related

terraform to launch and configure EC2 AD

I have two Windows Server AMIs:
the first acts as a client (since, as far as I can tell, there are no Windows 10 Enterprise AMIs in AWS) and the second has Active Directory installed.
I would like to create a Terraform script that automatically creates EC2 instances from these AMIs; the script should also configure AD with a domain and then join the Windows client to that domain.
Is that possible? If not, could it be achieved with a user data script?
It's not entirely clear what you need, but the two resources you will certainly use are:
1. The EC2 resource is aws_instance and is configured like this:
resource "aws_instance" "my-ec2" {
ami = "ami-058b1b7fe545997ae"
instance_type = "t2.micro"
subnet_id = "your-subnet-id"
availability_zone = "your-availability_zone"
vpc_security_group_ids = "your-security-group"
tags = {
Name = "me"
Environment = "dev"
}
}
2. The AD resource is aws_directory_service_directory:
resource "aws_directory_service_directory" "my-ad" {
name = "corp.notexample.com"
password = "SuperSecretPassw0rd"
size = "Small"
vpc_settings {
vpc_id = "my-vpc-id"
subnet_ids = [aws_subnet.foo.id, aws_subnet.bar.id]
}
connect_settings {
customer_dns_ips = ["A.B.C.D"]
customer_username = "Admin"
subnet_ids = [aws_subnet.foo.id, aws_subnet.bar.id]
vpc_id = "my-vpc-id"
}
tags = {
Name = "me"
Environment = "dev"
}
}
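For joining the Windows client to the domain, a user data script is one workable route. A rough sketch, assuming a Windows client AMI; the AMI ID, subnet and the plain-text credentials below are placeholders, and in practice the password should come from a secrets store rather than being inlined:
resource "aws_instance" "my-client" {
  ami           = "ami-xxxxxxxxxxxxxxxxx" # placeholder: the Windows client AMI
  instance_type = "t2.micro"
  subnet_id     = "your-subnet-id"

  # Joins the instance to the directory created above on first boot.
  user_data = <<-EOF
    <powershell>
    $password = ConvertTo-SecureString "SuperSecretPassw0rd" -AsPlainText -Force
    $credential = New-Object System.Management.Automation.PSCredential("corp\Admin", $password)
    Add-Computer -DomainName "${aws_directory_service_directory.my-ad.name}" -Credential $credential -Restart
    </powershell>
  EOF
}
Note that the join only succeeds if the instance can resolve the directory's DNS, for example via a VPC DHCP options set pointing at the directory's DNS IPs.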

Terraform - volume_tags and newly attached EBS

Currently I have a module that is used as a template to create a lot of EC2 instances in AWS. Using this template with volume_tags, I expect all the EBS volumes created along with an EC2 instance to get the same tags.
However, on some occasions I need to mount a few more EBS volumes to an instance after it has been created by this Terraform script, and those volumes carry a different set of tags (e.g. a Name tag of volume_123).
After attaching such a volume to the EC2 instance in the AWS web console, I run terraform init and terraform plan again, and Terraform reports changes to apply: the volume_tags of the instance appear to 'replace' the volume's original Name tag. Example output:
# module.ec2_2.aws_instance.ec2 will be updated in-place
~ resource "aws_instance" "ec2" {
      id          = "i-99999999999999999"
    ~ volume_tags = {
        ~ "Name" = "volume_123" -> "ec22"
      }
  }
When reading the documentation of the Terraform AWS provider, I understood that volume_tags should only apply when the instance is created. However, it seems that even after creation Terraform still tries to align the tags of every EBS volume attached to the instance. Since I need the newly attached volumes to keep a different set of tags from the root and EBS volumes attached at creation (different AMIs have different numbers of block devices), should I avoid using volume_tags to tag the volumes at creation? And if so, what should I do instead?
Here is the code:
terraform_folder/modules/ec2_template/main.tf
resource "aws_instance" "ec2" {
ami = var.ami
availability_zone = var.availability_zone
instance_type = var.instance_type
tags = merge(map("Name", var.name), var.tags)
volume_tags = merge(map("Name", var.name), var.tags)
}
terraform_folder/deployment/machines.tf
module "ec2_1" {
source = "../modules/ec2_template"
name = "ec21"
ami = local.ec2_ami_1["a"]
instance_type = local.ec2_instance_type["app"]
tags = merge(
map(
"Role", "app",
),
local.tags_default
)
}
module "ec2_2" {
source = "../modules/ec2_template"
name = "ec22"
ami = local.ec2_ami_2["b"]
instance_type = local.ec2_instance_type["app"]
tags = merge(
map(
"Role", "app",
),
local.tags_default
)
}
module "ec2_3" {
source = "../modules/ec2_template"
name = "ec23"
ami = local.ec2_ami_1["a"]
instance_type = local.ec2_instance_type["app"]
tags = merge(
map(
"Role", "app",
),
local.tags_default
)
}
terraform_folder/deployment/locals.tf
locals {
  ec2_ami_1 = {
    a = "ami-11111111111111111"
    b = "ami-22222222222222222"
  }
  ec2_ami_2 = {
    a = "ami-33333333333333333"
    b = "ami-44444444444444444"
  }
  ec2_ami_3 = {
    a = "ami-55555555555555555"
    b = "ami-66666666666666666"
  }
  tags_default = {
    Terraform       = "true"
    Environment     = "test"
    Application     = "app"
    BackupFrequency = "2"
  }
}
You shouldn't modify resources managed by Terraform manually through the AWS Console; that leads to resource drift and the issues you are experiencing.
Nevertheless, you can use the lifecycle meta-argument to tell Terraform to ignore changes to the volume tags:
resource "aws_instance" "ec2" {
ami = var.ami
availability_zone = var.availability_zone
instance_type = var.instance_type
tags = merge(map("Name", var.name), var.tags)
volume_tags = merge(map("Name", var.name), var.tags)
lifecycle {
ignore_changes = [volume_tags]
}
}
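Alternatively, if you want the extra volumes and their tags under Terraform's control as well, you can drop volume_tags and tag each volume explicitly: tag the root volume via root_block_device (supported on reasonably recent AWS provider versions) and manage additional volumes as their own resources. A sketch with illustrative names and sizes:
resource "aws_instance" "ec2" {
  ami               = var.ami
  availability_zone = var.availability_zone
  instance_type     = var.instance_type
  tags              = merge({ Name = var.name }, var.tags)

  # Tag only the root volume instead of using volume_tags, so tags on
  # separately attached volumes are left untouched.
  root_block_device {
    tags = merge({ Name = var.name }, var.tags)
  }
}
resource "aws_ebs_volume" "extra" {
  availability_zone = var.availability_zone
  size              = 20 # illustrative size

  tags = {
    Name = "volume_123" # this volume keeps its own tags
  }
}
resource "aws_volume_attachment" "extra" {
  device_name = "/dev/sdf"
  volume_id   = aws_ebs_volume.extra.id
  instance_id = aws_instance.ec2.id
}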

Terraform AWS : Couldn't reuse previously created root_block_device with AWS EC2 instance launched with aws_launch_configuration

I've deployed an ELK stack to AWS ECS with Terraform. Everything ran nicely for a few weeks, but two days ago I had to restart the instance.
Sadly, the new instance did not reuse the existing volume as its root block device, so all my Elasticsearch data is no longer available to my Kibana instance.
The data is still there, on the previous volume, which is currently unused.
I tried several things to get that volume attached at "/dev/xvda", without success, for example:
Using ebs_block_device instead of root_block_device
Swapping "/dev/xvda" while the instance is already running
I am using an aws_autoscaling_group with an aws_launch_configuration.
resource "aws_launch_configuration" "XXX" {
name = "XXX"
image_id = data.aws_ami.latest_ecs.id
instance_type = var.INSTANCE_TYPE
security_groups = [var.SECURITY_GROUP_ID]
associate_public_ip_address = true
iam_instance_profile = "XXXXXX"
spot_price = "0.04"
lifecycle {
create_before_destroy = true
}
user_data = templatefile("${path.module}/ecs_agent_conf_options.tmpl",
{
cluster_name = aws_ecs_cluster.XXX.name
}
)
//The volume i want to reuse was created with this configuration. I though it would
//be enough to reuse the same volume. It doesn't.
root_block_device {
delete_on_termination = false
volume_size = 50
volume_type = "gp2"
}
}
resource "aws_autoscaling_group" "YYY" {
name = "YYY"
min_size = var.MIN_INSTANCES
max_size = var.MAX_INSTANCES
desired_capacity = var.DESIRED_CAPACITY
health_check_type = "EC2"
availability_zones = ["eu-west-3b"]
launch_configuration = aws_launch_configuration.XXX.name
vpc_zone_identifier = [
var.SUBNET_1_ID,
var.SUBNET_2_ID]
}
Am I missing something obvious here?
Sadly, you cannot attach an existing volume as the root volume of an instance.
What you have to do is create a custom AMI based on your volume. This involves creating a snapshot of the volume followed by construction of the AMI:
Creating a Linux AMI from a snapshot
In Terraform, there is the aws_ami resource specifically for that purpose.
The following Terraform script exemplifies the process in three steps:
Creation of a snapshot of a given volume
Creation of an AMI from the snapshot
Creation of an instance from the AMI
provider "aws" {
# your data
}
resource "aws_ebs_snapshot" "snapshot" {
volume_id = "vol-0ff4363a40eb3357c" # <-- your EBS volume ID
}
resource "aws_ami" "my" {
name = "my-custom-ami"
virtualization_type = "hvm"
root_device_name = "/dev/xvda"
ebs_block_device {
device_name = "/dev/xvda"
snapshot_id = aws_ebs_snapshot.snapshot.id
volume_type = "gp2"
}
}
resource "aws_instance" "web" {
ami = aws_ami.my.id
instance_type = "t2.micro"
# key_name = "<your-key-name>"
tags = {
Name = "InstanceFromCustomAMI"
}
}
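Since the original setup launches instances through an autoscaling group rather than a standalone aws_instance, the resulting AMI can likewise be fed into a launch configuration; a sketch with illustrative values:
resource "aws_launch_configuration" "from_custom_ami" {
  name_prefix   = "from-custom-ami-"
  image_id      = aws_ami.my.id # boot new instances from the rebuilt root volume
  instance_type = "t2.micro"    # illustrative; reuse var.INSTANCE_TYPE in practice

  lifecycle {
    create_before_destroy = true
  }
}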

Terraform error creating subnet dependency

I'm trying to get a DocumentDB cluster up and running from within a private subnet I have created.
Running the config below without the depends_on, I get the following error message because the subnet hasn't been created yet:
Error: error creating DocDB cluster: DBSubnetGroupNotFoundFault: DB subnet group 'subnet-0b97a3f5bf6db758f' does not exist.
status code: 404, request id: 59b75d23-50a4-42f9-99a3-367af58e6e16
I added depends_on to wait for the subnet to be created, but I am still running into an issue.
resource "aws_docdb_cluster" "docdb" {
  cluster_identifier      = "my-docdb-cluster"
  engine                  = "docdb"
  master_username         = "myusername"
  master_password         = "mypassword"
  backup_retention_period = 5
  preferred_backup_window = "07:00-09:00"
  skip_final_snapshot     = true
  apply_immediately       = true
  db_subnet_group_name    = aws_subnet.eu-west-3a-private
  depends_on              = [aws_subnet.eu-west-3a-private]
}
On running terraform apply I am getting an error on the config:
Error: error creating DocDB cluster: DBSubnetGroupNotFoundFault: DB subnet group 'subnet-0b97a3f5bf6db758f' does not exist.
status code: 404, request id: 8b992d86-eb7f-427e-8f69-d05cc13d5b2d
on main.tf line 230, in resource "aws_docdb_cluster" "docdb":
230: resource "aws_docdb_cluster" "docdb"
A DB subnet group is a logical resource in itself that tells AWS where it may schedule a database instance in a VPC. It does not refer to the subnets directly, which is what you're trying to do there.
To create a DB subnet group you should use the aws_db_subnet_group resource, and then refer to it by name when creating database instances or clusters.
A basic example would look like this:
resource "aws_vpc" "example" {
cidr_block = "10.0.0.0/16"
}
resource "aws_subnet" "eu-west-3a" {
vpc_id = aws_vpc.example.id
availability_zone = "a"
cidr_block = "10.0.1.0/24"
tags = {
AZ = "a"
}
}
resource "aws_subnet" "eu-west-3b" {
vpc_id = aws_vpc.example.id
availability_zone = "b"
cidr_block = "10.0.2.0/24"
tags = {
AZ = "b"
}
}
resource "aws_db_subnet_group" "example" {
name = "main"
subnet_ids = [
aws_subnet.eu-west-3a.id,
aws_subnet.eu-west-3b.id
]
tags = {
Name = "My DB subnet group"
}
}
resource "aws_db_instance" "example" {
allocated_storage = 20
storage_type = "gp2"
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t2.micro"
name = "mydb"
username = "foo"
password = "foobarbaz"
parameter_group_name = "default.mysql5.7"
db_subnet_group_name = aws_db_subnet_group.example.name
}
The same thing applies to Elasticache subnet groups which use the aws_elasticache_subnet_group resource.
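And since the question is about DocumentDB specifically, the analogous resource there is aws_docdb_subnet_group; a minimal sketch reusing the subnets above (names are illustrative):
resource "aws_docdb_subnet_group" "example" {
  name = "docdb-main"
  subnet_ids = [
    aws_subnet.eu-west-3a.id,
    aws_subnet.eu-west-3b.id
  ]
}
resource "aws_docdb_cluster" "docdb" {
  cluster_identifier   = "my-docdb-cluster"
  engine               = "docdb"
  master_username      = "myusername"
  master_password      = "mypassword"
  skip_final_snapshot  = true
  db_subnet_group_name = aws_docdb_subnet_group.example.name
}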
It's also worth noting that adding depends_on to a resource that already references the dependent resource via interpolation does nothing. The depends_on meta-argument is only for resources that don't expose a parameter that would provide this dependency information directly.
It seems the value of the parameter is wrong. A db_subnet_group_name created elsewhere gives an id/arn as output, so you need to use the id value, although the depends_on clause looks fine:
db_subnet_group_name = aws_db_subnet_group.eu-west-3a-private.id
That would be correct; you can also try the arn in place of the id.
Thanks,
Ashish

Terraform Spot Instance inside VPC

I'm trying to launch a spot instance inside a VPC using Terraform.
I had a working aws_instance setup, and just changed it to aws_spot_instance_request, but I always get this error:
* aws_spot_instance_request.machine: Error requesting spot instances: InvalidParameterCombination: VPC security groups may not be used for a non-VPC launch
status code: 400, request id: []
My .tf file looks like this:
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "${var.region}"
}
resource "template_file" "userdata" {
filename = "${var.userdata}"
vars {
domain = "${var.domain}"
name = "${var.name}"
}
}
resource "aws_spot_instance_request" "machine" {
ami = "${var.amiPuppet}"
key_name = "${var.key}"
instance_type = "c3.4xlarge"
subnet_id = "${var.subnet}"
vpc_security_group_ids = [ "${var.securityGroup}" ]
user_data = "${template_file.userdata.rendered}"
wait_for_fulfillment = true
spot_price = "${var.price}"
tags {
Name = "${var.name}.${var.domain}"
Provider = "Terraform"
}
}
resource "aws_route53_record" "machine" {
zone_id = "${var.route53ZoneId}"
name = "${aws_spot_instance_request.machine.tags.Name}"
type = "A"
ttl = "300"
records = ["${aws_spot_instance_request.machine.private_ip}"]
}
I don't understand why it isn't working.
The documentation states that aws_spot_instance_request supports all parameters of aws_instance, so I just changed a working aws_instance to an aws_spot_instance_request (with the addition of the spot price). Am I doing something wrong?
I originally opened this as an issue in the Terraform repo, but no one replied.
It's a bug in Terraform; it appears to be fixed in master:
https://github.com/hashicorp/terraform/issues/1339