How to remove an AWS Instance volume using Terraform

I deploy CentOS 7 using an AMI that automatically creates a volume on AWS, so when I remove the platform using the following Terraform commands:
terraform plan -destroy -var-file terraform.tfvars -out terraform.tfplan
terraform apply terraform.tfplan
the volume is not removed, because it was created automatically by the AMI rather than by Terraform. Is it possible to remove it with Terraform?
My AWS instance is created with the following Terraform code:
resource "aws_instance" "DCOS-master1" {
ami = "${var.aws_centos_ami}"
availability_zone = "eu-west-1b"
instance_type = "t2.medium"
key_name = "${var.aws_key_name}"
security_groups = ["${aws_security_group.bastion.id}"]
associate_public_ip_address = true
private_ip = "10.0.0.11"
source_dest_check = false
subnet_id = "${aws_subnet.eu-west-1b-public.id}"
tags {
Name = "master1"
}
}

I added the following code to get information about the EBS volume and obtain its ID:
data "aws_ebs_volume" "ebs_volume" {
most_recent = true
filter {
name = "attachment.instance-id"
values = ["${aws_instance.DCOS-master1.id}"]
}
}
output "ebs_volume_id" {
value = "${data.aws_ebs_volume.ebs_volume.id}"
}
Then, having the EBS volume ID, I import the volume into the Terraform state using:
terraform import aws_ebs_volume.data volume-ID
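For the import to have a target, the configuration also needs a matching aws_ebs_volume resource block named data (a minimal sketch; the values here are assumptions and should match the real volume):

resource "aws_ebs_volume" "data" {
  # these attributes should mirror the real volume so that a later plan shows no changes
  availability_zone = "eu-west-1b"
  size              = 8
}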
Finally, when I run terraform destroy, all the instances and volumes are destroyed.

If the EBS volume is protected, you need to manually remove the termination protection in the console first; then you can destroy it.
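As a side note, instance termination protection maps to the disable_api_termination argument on aws_instance, so it could also be switched off in the configuration before destroying (a sketch, not part of the original answer):

resource "aws_instance" "DCOS-master1" {
  ami           = "${var.aws_centos_ami}"
  instance_type = "t2.medium"

  # when set to false, terraform destroy is allowed to terminate the instance
  disable_api_termination = false
}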

Related

Rebuild existing EC2 instance from snapshot?

I have an existing Linux EC2 instance with a corrupted root volume. I have a snapshot of the root volume that is not corrupted. Is it possible with Terraform to rebuild the instance based on the ID of that snapshot?
Of course it is possible; this simple configuration should do the job:
resource "aws_ami" "aws_ami_name" {
name = "aws_ami_name"
virtualization_type = "hvm"
root_device_name = "/dev/sda1"
ebs_block_device {
snapshot_id = "snapshot_ID”
device_name = "/dev/sda1"
volume_type = "gp2"
}
}
resource "aws_instance" "ec2_name" {
ami = "${aws_ami.aws_ami_name.id}"
instance_type = "t3.large"
}
It's not really a Terraform-type task, since you're not deploying new infrastructure.
Instead, do it manually:
Create a new EBS Volume from the Snapshot
Stop the instance
Detach the existing root volume (make a note of the device identifier, such as /dev/sda1)
Attach the new Volume with the same identifier
Start the instance

How to create a temporary instance for a custom AMI creation in AWS with terraform?

I'm trying to create a custom AMI for my AWS deployment with Terraform. It works quite well, and it is also possible to run a bash script. The problem is that it is not possible to create the instance only temporarily and then terminate the EC2 instance and all its dependent resources with Terraform.
First I build an aws_instance, then I provide a bash script in the /tmp folder and have it executed over an SSH connection from the Terraform script. It looks like the following:
First, the aws_instance is created based on a standard Amazon Machine Image (AMI). It is later used to create an image from it.
resource "aws_instance" "custom_ami_image" {
tags = { Name = "custom_ami_image" }
ami = var.ami_id //base custom ami id
subnet_id = var.subnet_id
vpc_security_group_ids = [var.security_group_id]
iam_instance_profile = "ec2-instance-profile"
instance_type = "t2.micro"
ebs_block_device {
//...further configurations
}
Now a bash script is provided. The source is the location of the bash script on the local Linux box you are executing Terraform from; the destination is on the new AWS instance. In the file I install further things like python3, Oracle drivers and so on...
provisioner "file" {
source = "../bash_file"
destination = "/tmp/bash_file"
}
Then I change the permissions on the bash script and execute it as the SSH user:
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/bash_file",
"sudo /tmp/bash_file",
]
}
Now you can log in as the SSH user with the previously created key.
  connection {
    type        = "ssh"
    user        = "ssh-user"
    password    = ""
    private_key = file("${var.key_name}.pem")
    host        = self.private_ip
  }
}
With aws_ami_from_instance, the AMI can be created from the EC2 instance that was just built. It is then available for further deployments, and it is also possible to share it with other AWS accounts.
resource "aws_ami_from_instance" "custom_ami_image {
name = "acustom_ami_image"
source_instance_id = aws_instance.custom_ami_image.id
}
It works fine, but what bothers me is the resulting EC2 instance: it keeps running, and it is not possible to terminate it with Terraform. Does anyone have an idea how I can handle this? Sure, the running costs are manageable, but I don't like creating data garbage...
I think the best way to create AMI images is Packer, which, like Terraform, is also from HashiCorp.
What is Packer?
From "Provision Infrastructure with Packer": Packer is HashiCorp's open-source tool for creating machine images from source configuration. You can configure Packer images with an operating system and software for your specific use-case.
Packer creates a temporary instance with a temporary key pair, security group and IAM role. Custom inline commands can be run in the "shell" provisioner. Afterwards you can use this AMI in your Terraform code.
A sample script could look like this:
packer {
  required_plugins {
    amazon = {
      version = ">= 0.0.2"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

source "amazon-ebs" "linux" {
  # AMI Settings
  ami_name                    = "ami-oracle-python3"
  instance_type               = "t2.micro"
  source_ami                  = "ami-xxxxxxxx"
  ssh_username                = "ec2-user"
  associate_public_ip_address = false
  ami_virtualization_type     = "hvm"
  subnet_id                   = "subnet-xxxxxx"

  launch_block_device_mappings {
    device_name           = "/dev/xvda"
    volume_size           = 8
    volume_type           = "gp2"
    delete_on_termination = true
    encrypted             = false
  }

  # Profile Settings
  profile = "xxxxxx"
  region  = "eu-central-1"
}

build {
  sources = [
    "source.amazon-ebs.linux"
  ]

  provisioner "shell" {
    inline = [
      "export no_proxy=localhost"
    ]
  }
}
You can find more details in the official Packer documentation.
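Once Packer has built the image, the AMI can be consumed from Terraform, for example with a data source lookup on the ami_name used above (a minimal sketch; the resource names are illustrative):

data "aws_ami" "oracle_python3" {
  most_recent = true
  owners      = ["self"] # the AMI was built in this account by Packer

  filter {
    name   = "name"
    values = ["ami-oracle-python3"]
  }
}

resource "aws_instance" "from_packer_ami" {
  ami           = data.aws_ami.oracle_python3.id
  instance_type = "t2.micro"
}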

Terraform - volume_tags and newly attached EBS

Currently I have a module that is used as a template to create a lot of EC2 instances in AWS. Using this template with volume_tags, I would expect all the EBS volumes created along with the EC2 instances to get the same tags.
However, the issue is that after I create the EC2 instances with this Terraform script, on some occasions I need to mount a few more EBS volumes on an instance, and those volumes get a different set of tags (e.g. a Name tag of volume_123).
After mounting such a volume to the EC2 instance in the AWS web console, I run terraform init and terraform plan again, and it tells me there are changes to apply: the volume_tags of the EC2 instance appear to replace the original Name tag of the volume. Example output:
# module.ec2_2.aws_instance.ec2 will be updated in-place
~ resource "aws_instance" "ec2" {
      id          = "i-99999999999999999"
    ~ volume_tags = {
        ~ "Name" = "volume_123" -> "ec22"
      }
  }
Reading the documentation of the Terraform AWS provider, I understand that volume_tags should only apply when the instance is created. However, it seems that even after creation Terraform still tries to align the tags of every EBS volume attached to the EC2 instance. Since I need to keep the newly attached volumes with a different set of tags than the root and EBS volumes attached when the EC2 instance is created (different AMIs have a different number of block devices), should I avoid using volume_tags to tag the volumes at creation? And if I should not use it, what should I do instead?
This is the code:
terraform_folder/modules/ec2_template/main.tf
resource "aws_instance" "ec2" {
ami = var.ami
availability_zone = var.availability_zone
instance_type = var.instance_type
tags = merge(map("Name", var.name), var.tags)
volume_tags = merge(map("Name", var.name), var.tags)
}
terraform_folder/deployment/machines.tf
module "ec2_1" {
source = "../modules/ec2_template"
name = "ec21"
ami = local.ec2_ami_1["a"]
instance_type = local.ec2_instance_type["app"]
tags = merge(
map(
"Role", "app",
),
local.tags_default
)
}
module "ec2_2" {
source = "../modules/ec2_template"
name = "ec22"
ami = local.ec2_ami_2["b"]
instance_type = local.ec2_instance_type["app"]
tags = merge(
map(
"Role", "app",
),
local.tags_default
)
}
module "ec2_3" {
source = "../modules/ec2_template"
name = "ec23"
ami = local.ec2_ami_1["a"]
instance_type = local.ec2_instance_type["app"]
tags = merge(
map(
"Role", "app",
),
local.tags_default
)
}
terraform_folder/deployment/locals.tf
locals {
  ec2_ami_1 = {
    a = "ami-11111111111111111"
    b = "ami-22222222222222222"
  }
  ec2_ami_2 = {
    a = "ami-33333333333333333"
    b = "ami-44444444444444444"
  }
  ec2_ami_3 = {
    a = "ami-55555555555555555"
    b = "ami-66666666666666666"
  }
  tags_default = {
    Terraform       = "true"
    Environment     = "test"
    Application     = "app"
    BackupFrequency = "2"
  }
}
You shouldn't be modifying resources managed by Terraform manually through the AWS Console. This leads to resource drift and the issues you are experiencing.
Nevertheless, you can use the lifecycle meta-argument to tell Terraform to ignore changes to your volume tags:
resource "aws_instance" "ec2" {
ami = var.ami
availability_zone = var.availability_zone
instance_type = var.instance_type
tags = merge(map("Name", var.name), var.tags)
volume_tags = merge(map("Name", var.name), var.tags)
lifecycle {
ignore_changes = [volume_tags]
}
}
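If you would rather not rely on volume_tags at all, one possible alternative (a sketch, not from the original answer) is to manage the additional volumes as their own Terraform resources, so each volume carries its own tags independently of the instance:

resource "aws_ebs_volume" "extra" {
  availability_zone = var.availability_zone
  size              = 100 # assumed size for illustration

  tags = {
    Name = "volume_123"
  }
}

resource "aws_volume_attachment" "extra" {
  device_name = "/dev/sdf"
  volume_id   = aws_ebs_volume.extra.id
  instance_id = aws_instance.ec2.id
}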

Terraform AWS : Couldn't reuse previously created root_block_device with AWS EC2 instance launched with aws_launch_configuration

I've deployed an ELK stack to AWS ECS with Terraform. All was running nicely for a few weeks, but two days ago I had to restart the instance.
Sadly, the new instance did not reuse the existing volume as its root block device, so all my Elasticsearch data is no longer available to my Kibana instance.
The data is still there, on the previous volume, which is currently unused.
So I tried many things to get this volume attached at /dev/xvda, but without success, for example:
Using ebs_block_device instead of root_block_device
Swapping /dev/xvda while the instance is already running
I am using an aws_autoscaling_group with an aws_launch_configuration.
resource "aws_launch_configuration" "XXX" {
name = "XXX"
image_id = data.aws_ami.latest_ecs.id
instance_type = var.INSTANCE_TYPE
security_groups = [var.SECURITY_GROUP_ID]
associate_public_ip_address = true
iam_instance_profile = "XXXXXX"
spot_price = "0.04"
lifecycle {
create_before_destroy = true
}
user_data = templatefile("${path.module}/ecs_agent_conf_options.tmpl",
{
cluster_name = aws_ecs_cluster.XXX.name
}
)
//The volume i want to reuse was created with this configuration. I though it would
//be enough to reuse the same volume. It doesn't.
root_block_device {
delete_on_termination = false
volume_size = 50
volume_type = "gp2"
}
}
resource "aws_autoscaling_group" "YYY" {
name = "YYY"
min_size = var.MIN_INSTANCES
max_size = var.MAX_INSTANCES
desired_capacity = var.DESIRED_CAPACITY
health_check_type = "EC2"
availability_zones = ["eu-west-3b"]
launch_configuration = aws_launch_configuration.XXX.name
vpc_zone_identifier = [
var.SUBNET_1_ID,
var.SUBNET_2_ID]
}
Am I missing something obvious here?
Sadly, you cannot attach an existing volume as the root volume of an instance.
What you have to do is create a custom AMI based on your volume. This involves creating a snapshot of the volume, followed by construction of the AMI:
Creating a Linux AMI from a snapshot
In Terraform, there is the aws_ami resource specifically for that purpose.
The following terraform script exemplifies the process in three steps:
Creation of a snapshot of a given volume
Creation of an AMI from the snapshot
Creation of an instance from the AMI
provider "aws" {
# your data
}
resource "aws_ebs_snapshot" "snapshot" {
volume_id = "vol-0ff4363a40eb3357c" # <-- your EBS volume ID
}
resource "aws_ami" "my" {
name = "my-custom-ami"
virtualization_type = "hvm"
root_device_name = "/dev/xvda"
ebs_block_device {
device_name = "/dev/xvda"
snapshot_id = aws_ebs_snapshot.snapshot.id
volume_type = "gp2"
}
}
resource "aws_instance" "web" {
ami = aws_ami.my.id
instance_type = "t2.micro"
# key_name = "<your-key-name>"
tags = {
Name = "InstanceFromCustomAMI"
}
}
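Since the question uses an auto scaling group rather than a standalone instance, the same custom AMI could also be referenced from the launch configuration instead of the latest ECS AMI (a sketch, not part of the original answer; the remaining arguments stay as in the question):

resource "aws_launch_configuration" "XXX" {
  name          = "XXX"
  image_id      = aws_ami.my.id # custom AMI built from the old root volume's snapshot
  instance_type = var.INSTANCE_TYPE

  lifecycle {
    create_before_destroy = true
  }
}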

How to get pem file for AWS Autoscaling launched instance

I have a Terraform script that launches a VPC, subnets, a database, autoscaling and some other things. Autoscaling uses default Windows Server 2012 R2 images to launch new instances (including the initial ones). Every instance executes a Chef install after launch. I need to log into an instance so I can confirm that Chef is installed, but I don't have any .pem keys. How do I launch an instance with autoscaling and a launch configuration and output a .pem file so I can log in afterwards?
Here is my autoscaling part of the script:
resource "aws_autoscaling_group" "asgPrimary" {
depends_on = ["aws_launch_configuration.primary"]
availability_zones = ["${data.aws_availability_zones.available.names[0]}"]
name = "TerraformASGPrimary"
max_size = 1
min_size = 1
wait_for_capacity_timeout = "0"
health_check_grace_period = 300
health_check_type = "ELB"
desired_capacity = 1
force_delete = false
wait_for_capacity_timeout = "0"
vpc_zone_identifier = ["${aws_subnet.private_primary.id}"]
#placement_group = "${aws_placement_group.test.id}"
launch_configuration = "${aws_launch_configuration.primary.name}"
load_balancers = ["${aws_elb.elb.name}"]
}
and this is my launch configuration:
resource "aws_launch_configuration" "primary" {
depends_on = ["aws_subnet.primary"]
name = "web_config_primary"
image_id = "${data.aws_ami.amazon_windows_2012R2.id}"
instance_type = "${var.ami_type}"
security_groups = ["${aws_security_group.primary.id}"]
user_data = "${template_file.user_data.rendered}"
}
I need to avoid using the Amazon CLI or the web console itself; the point is for all of this to be automated for reuse in all my other solutions.
The .pem files used to RDP/SSH into an EC2 instance are not generated during the launch of an EC2 instance. It may appear that way when using the AWS Management Console, but in actuality the key pair is generated first, and then that key pair is assigned to the EC2 instance during launch.
To get your .pem file, first:
Generate a new Key Pair. See Amazon EC2 Key Pairs. When you do this, you will be able to download the .pem file.
Assign that Key Pair to your Auto Scaling Group's launch configuration using the key_name argument.
Here's an example:
resource "aws_launch_configuration" "primary" {
depends_on = ["aws_subnet.primary"]
name = "web_config_primary"
image_id = "${data.aws_ami.amazon_windows_2012R2.id}"
instance_type = "${var.ami_type}"
security_groups = ["${aws_security_group.primary.id}"]
user_data = "${template_file.user_data.rendered}",
key_name = "my-key-pair"
}
See: https://www.terraform.io/docs/providers/aws/r/launch_configuration.html#key_name
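If the key pair itself should also be created by Terraform instead of by hand, a minimal sketch using the tls, aws and local providers could look like this (not part of the original answer; resource names are illustrative):

# generate an RSA key pair locally
resource "tls_private_key" "windows" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# register the public key with AWS so new instances can use it
resource "aws_key_pair" "windows" {
  key_name   = "my-key-pair"
  public_key = "${tls_private_key.windows.public_key_openssh}"
}

# write the private key to disk; for Windows instances it is used to decrypt the Administrator password
resource "local_file" "pem" {
  content  = "${tls_private_key.windows.private_key_pem}"
  filename = "${path.module}/my-key-pair.pem"
}

The launch configuration can then reference it with key_name = "${aws_key_pair.windows.key_name}". Keep in mind that the private key ends up in the Terraform state, so treat the state file as sensitive.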