Rebuild existing EC2 instance from snapshot? - amazon-web-services

I have an existing Linux EC2 instance with a corrupted root volume. I have a snapshot of the root volume that is not corrupted. Is it possible to rebuild the instance with Terraform, based on the ID of that snapshot?

Of course it is possible; this simple configuration should do the job:
resource "aws_ami" "aws_ami_name" {
name = "aws_ami_name"
virtualization_type = "hvm"
root_device_name = "/dev/sda1"
ebs_block_device {
snapshot_id = "snapshot_ID”
device_name = "/dev/sda1"
volume_type = "gp2"
}
}
resource "aws_instance" "ec2_name" {
ami = "${aws_ami.aws_ami_name.id}"
instance_type = "t3.large"
}

It's not really a Terraform-type task, since you're not deploying new infrastructure.
Instead, do it manually:
Create a new EBS Volume from the Snapshot
Stop the instance
Detach the existing root volume (make a note of the device identifier such as /dev/sda1)
Attach the new Volume with the same identifier
Start the instance
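If you would rather have Terraform manage the replacement volume anyway, roughly the same steps can be sketched as an aws_ebs_volume built from the snapshot plus an aws_volume_attachment on the old device identifier. This is a minimal sketch only: the IDs are placeholders, and the instance must be stopped when the attachment is applied:
resource "aws_ebs_volume" "restored_root" {
  availability_zone = "eu-west-1a"             # must match the instance's AZ
  snapshot_id       = "snap-0123456789abcdef0" # placeholder: your snapshot ID
}

resource "aws_volume_attachment" "restored_root" {
  device_name = "/dev/sda1"                    # same identifier as the old root volume
  volume_id   = aws_ebs_volume.restored_root.id
  instance_id = "i-0123456789abcdef0"          # placeholder: the existing instance
}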

Related

Terraform launch template creating two volumes for AWS EKS Cluster Autoscaling Group

I have an EKS cluster with a node group that is configured with a launch template. All of the resources are created with Terraform.
launch_template.tf:
resource "aws_launch_template" "launch-template" {
name = var.name
update_default_version = var.update_default_version
instance_type = var.instance_type
key_name = var.key_name
block_device_mappings {
device_name = var.block_device_name
ebs {
volume_size = var.volume_size
}
}
ebs_optimized = var.ebs_optimized
monitoring {
enabled = var.monitoring_enabled
}
dynamic "tag_specifications" {
for_each = toset(var.resources_to_tag)
content {
resource_type = tag_specifications.key
tags = var.tags
}
}
}
eks_nodegroup.tf:
resource "aws_eks_node_group" "eks-nodegroup" {
cluster_name = var.cluster_name
node_group_name = var.node_group_name
node_role_arn = var.node_role_arn
subnet_ids = var.subnet_ids
labels = var.labels
tags = var.tags
scaling_config {
desired_size = var.desired_size
max_size = var.max_size
min_size = var.min_size
}
launch_template {
id = var.launch_template_id
version = var.launch_template_version
}
}
These resources reference each other. But at the end of the day, this setup creates:
2 launch templates,
1 Auto Scaling group,
2 volumes for each instance in the Auto Scaling group.
I understood from this question that the second launch template is created because I'm using the aws_launch_template resource together with aws_eks_node_group. But I didn't understand where the second volume for each instance is coming from. One of the volumes matches my configuration: 40 GB capacity, path /dev/sda1 and 120 IOPS. But the second one has 20 GB capacity, path /dev/xvda and 100 IOPS. I don't have any configuration like this in my Terraform setup.
I couldn't find the source of the second volume. Any guidance would be highly appreciated. Thank you very much.
Your second volume is being created from the default volume definition of aws_eks_node_group: its disk_size parameter defaults to 20 GB.
Note that disk_size is not configurable when using a launch template; configuring it will cause an error.
I suspect you may be using a Bottlerocket AMI, which comes with two volumes: one is the OS volume and the second is the data volume. You likely want to configure the data volume size, which is exposed at /dev/xvdb by default.
See https://github.com/bottlerocket-os/bottlerocket#default-volumes
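If that is the case, a minimal sketch of sizing the Bottlerocket data volume through the launch template could look like this (the resource name and size are illustrative; /dev/xvdb is the default data volume device):
resource "aws_launch_template" "bottlerocket_nodes" {
  name = "bottlerocket-nodes" # hypothetical name

  block_device_mappings {
    device_name = "/dev/xvdb" # Bottlerocket's data volume

    ebs {
      volume_size = 40 # GiB; size to your needs
      volume_type = "gp2"
    }
  }
}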

Terraform AWS: Couldn't reuse a previously created root_block_device with an AWS EC2 instance launched with aws_launch_configuration

I've deployed an ELK stack to AWS ECS with Terraform. Everything was running nicely for a few weeks, but two days ago I had to restart the instance.
Sadly, the new instance did not reuse the existing volume as its root block device, so all my Elasticsearch data is no longer available to my Kibana instance.
The data is still there, on the previous volume, which is currently unused.
So I tried several things to get this volume attached at "dev/xvda", but without success, for example:
Using ebs_block_device instead of root_block_device
Swapping "dev/xvda" while the instance is already running
I am using an aws_autoscaling_group with an aws_launch_configuration.
resource "aws_launch_configuration" "XXX" {
name = "XXX"
image_id = data.aws_ami.latest_ecs.id
instance_type = var.INSTANCE_TYPE
security_groups = [var.SECURITY_GROUP_ID]
associate_public_ip_address = true
iam_instance_profile = "XXXXXX"
spot_price = "0.04"
lifecycle {
create_before_destroy = true
}
user_data = templatefile("${path.module}/ecs_agent_conf_options.tmpl",
{
cluster_name = aws_ecs_cluster.XXX.name
}
)
//The volume i want to reuse was created with this configuration. I though it would
//be enough to reuse the same volume. It doesn't.
root_block_device {
delete_on_termination = false
volume_size = 50
volume_type = "gp2"
}
}
resource "aws_autoscaling_group" "YYY" {
name = "YYY"
min_size = var.MIN_INSTANCES
max_size = var.MAX_INSTANCES
desired_capacity = var.DESIRED_CAPACITY
health_check_type = "EC2"
availability_zones = ["eu-west-3b"]
launch_configuration = aws_launch_configuration.XXX.name
vpc_zone_identifier = [
var.SUBNET_1_ID,
var.SUBNET_2_ID]
}
Am I missing something obvious here?
Sadly, you cannot attach an existing volume as the root volume of an instance.
What you have to do instead is create a custom AMI from your volume. This involves creating a snapshot of the volume, followed by construction of the AMI:
Creating a Linux AMI from a snapshot
In Terraform, there is the aws_ami resource specifically for that purpose.
The following Terraform script exemplifies the process in three steps:
Creation of a snapshot of a given volume
Creation of an AMI from the snapshot
Creation of an instance from the AMI
provider "aws" {
# your data
}
resource "aws_ebs_snapshot" "snapshot" {
volume_id = "vol-0ff4363a40eb3357c" # <-- your EBS volume ID
}
resource "aws_ami" "my" {
name = "my-custom-ami"
virtualization_type = "hvm"
root_device_name = "/dev/xvda"
ebs_block_device {
device_name = "/dev/xvda"
snapshot_id = aws_ebs_snapshot.snapshot.id
volume_type = "gp2"
}
}
resource "aws_instance" "web" {
ami = aws_ami.my.id
instance_type = "t2.micro"
# key_name = "<your-key-name>"
tags = {
Name = "InstanceFromCustomAMI"
}
}

How does Terraform handle mounting an AWS Elastic Block Store (EBS) with regard to partitioning?

Sample Terraform code snippet for mounting an EBS volume (just for context):
resource "aws_ebs_volume" "ebs-volume-1" {
availability_zone = "eu-central-1a"
size = 20
type = "gp2"
tags = {
Name = "extra volume data"
}
}
resource "aws_volume_attachment" "ebs-volume-1-attachment" {
device_name = "/dev/xvdh"
volume_id = aws_ebs_volume.ebs-volume-1.id
instance_id = aws_instance.example.id
}
I have read through the Terraform documentation on the attributes it has for mounting EBS volumes here; however, it is not clear to me how that block device gets partitioned (or whether partitioning is even relevant here).
Explanation of what EBS is: here
The "aws_ebs_volume" resource will create the EBS volume, the "aws_volume_attachment" puts it on an EC2 instance.
Keep in mind, you can do this in an "aws_instance" resource as well.
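Attaching the volume only exposes a raw block device to the instance; any partitioning, filesystem creation and mounting has to happen inside the OS, for example via user_data. Here is a minimal sketch, assuming the device appears as /dev/xvdh and using placeholder AMI and resource names:
resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  # Runs on first boot and assumes the volume is already attached as /dev/xvdh.
  user_data = <<-EOF
    #!/bin/bash
    # create a filesystem only if the device does not have one yet
    if [ -z "$(blkid -o value -s TYPE /dev/xvdh)" ]; then
      mkfs.ext4 /dev/xvdh
    fi
    mkdir -p /data
    mount /dev/xvdh /data
  EOF
}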

Rebuilding a Terraform AWS instance with persisted data

I have a Terraform script where I launch a cluster of EC2 instances and join them together (specifically for InfluxDB). Here is the relevant part of the script:
resource "aws_instance" "influxdata" {
count = "${var.ec2-count-influx-data}"
ami = "${module.amis.rhel73_id}"
instance_type = "${var.ec2-type-influx-data}"
vpc_security_group_ids = ["${var.sg-ids}"]
subnet_id = "${element(module.infra.subnet,count.index)}"
key_name = "${var.KeyName}"
tags {
Name = "influx-data-node"
System = "${module.infra.System}"
Environment = "${module.infra.Environment}"
OwnerContact = "${module.infra.OwnerContact}"
Owner = "${var.Owner}"
}
ebs_block_device {
device_name = "/dev/sdg"
volume_size = 750
volume_type = "io1"
iops = 3000
encrypted = true
delete_on_termination = false
}
user_data = "${file("terraform/attach_ebs.sh")}"
connection {
//private_key = "${file("/Users/key_CD.pem")}" #dev env
//private_key = "${file("/Users/influx.pem")}" #qa env west
private_key = "${file("/Users/influx_east.pem")}" #qa env east
user = "ec2-user"
}
provisioner "remote-exec" {
inline = ["echo just checking for ssh. ttyl. bye."]
}
}
What I'm now trying to do is taint one instance and then have Terraform rebuild it, but what I want it to do is: unmount the EBS volume, detach it, rebuild the instance, reattach the EBS volume, and mount it again.
When I do a terraform taint module=instance it does taint it, but when I go to apply the change it creates a whole new instance and a new EBS volume instead of reattaching the previous volume to the new instance.
I'm also OK with some data loss, as this is part of a cluster, so when the node gets rebuilt it should just sync up with the other nodes.
How can one do this with Terraform?
Create a snapshot of the instance you want to taint, then change the EC2 resource's EBS volume to reference that snapshot via the snapshot_id parameter: https://www.terraform.io/docs/providers/aws/r/instance.html#snapshot_id
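For illustration, that would mean something along these lines in the instance's ebs_block_device (the snapshot ID is a placeholder; encrypted is omitted here because, to my knowledge, it cannot be combined with snapshot_id, the encryption state being inherited from the snapshot):
resource "aws_instance" "influxdata" {
  # ... other arguments as in the question ...

  ebs_block_device {
    device_name           = "/dev/sdg"
    snapshot_id           = "snap-0123456789abcdef0" # placeholder: the snapshot taken before tainting
    volume_size           = 750
    volume_type           = "io1"
    iops                  = 3000
    delete_on_termination = false
  }
}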
Since you are not concerned about data loss, I assume that having an EC2 instance in the same state as before the rebuild, but without the data, is OK.
If that's the case, I would use user data to automatically format and mount the newly attached volume after the rebuild. That way, after the rebuild the EC2 instance will be in the same state (with the volume formatted and mounted) and ready to sync data with the other nodes in the cluster without any additional effort. The script should look like this:
#!/bin/bash
DEVICE_FS=`blkid -o value -s TYPE /dev/xvdh`
if [ "`echo -n $DEVICE_FS`" == "" ]; then
  mkfs.ext4 /dev/xvdh
fi
mkdir -p /data
echo '/dev/xvdh /data ext4 defaults 0 0' >> /etc/fstab
mount /data

How to remove an AWS Instance volume using Terraform

I deploy a CentOS 7 instance using an AMI that automatically creates a volume on AWS, so when I remove the platform using the following Terraform commands:
terraform plan -destroy -var-file terraform.tfvars -out terraform.tfplan
terraform apply terraform.tfplan
the volume is not removed, because it was created automatically by the AMI and not by Terraform. Is it possible to remove it with Terraform?
My AWS instance is created with the following Terraform code:
resource "aws_instance" "DCOS-master1" {
ami = "${var.aws_centos_ami}"
availability_zone = "eu-west-1b"
instance_type = "t2.medium"
key_name = "${var.aws_key_name}"
security_groups = ["${aws_security_group.bastion.id}"]
associate_public_ip_address = true
private_ip = "10.0.0.11"
source_dest_check = false
subnet_id = "${aws_subnet.eu-west-1b-public.id}"
tags {
Name = "master1"
}
}
I added the following code to get information about the EBS volume and obtain its ID:
data "aws_ebs_volume" "ebs_volume" {
most_recent = true
filter {
name = "attachment.instance-id"
values = ["${aws_instance.DCOS-master1.id}"]
}
}
output "ebs_volume_id" {
value = "${data.aws_ebs_volume.ebs_volume.id}"
}
Then, having the EBS volume ID, I import it into the Terraform state using:
terraform import aws_ebs_volume.data volume-ID
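Note that terraform import expects a matching resource block to already exist in the configuration to receive the imported volume, along these lines (the attributes are placeholders and must match the real volume):
resource "aws_ebs_volume" "data" {
  availability_zone = "eu-west-1b" # must match the instance's AZ
  size              = 8            # placeholder: match the actual volume size
}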
Finally, when I run terraform destroy, all the instances and volumes are destroyed.
If the EBS volume is protected, you need to manually remove the termination protection in the console first; then you can destroy it.