We have a requirement to create an EC2 module and use it to create one or more EC2 instances plus EBS devices/volumes, and to use the same module to create one or more EC2 instances without any EBS volumes.
I tried it via a conditional (count) but I'm hitting all sorts of errors. Help!
When trying to conditionally create a resource, you can use a ternary to calculate the count parameter.
A few notes:
When using count, the aws_instance.example, aws_ebs_volume.ebs-volume-1, and aws_ebs_volume.ebs-volume-2 resources will be arrays.
When attaching the EBS volumes to the instances, since the aws_volume_attachment resources also have a count, you can think of them as iterating over those arrays to attach each volume to its EC2 instance.
You can use count.index to extract the correct item from the array of the EC2 instances and the two EBS volume resources. For each value of count, the block is executed once.
variable "create_ebs" {
default = false
}
variable "instance_count" {
default = "1"
}
resource "aws_instance" "example" {
count = "${var.instance_count}"
ami = "ami-1"
instance_type = "t2.micro"
subnet_id = "subnet-1" #need to have more than one subnet
}
resource "aws_ebs_volume" "ebs-volume-1" {
count = "${var.create_ebs ? var.instance_count : 0}"
availability_zone = "us-east-1a" #use az based on the subnet
size = 10
type = "standard"
}
resource "aws_ebs_volume" "ebs-volume-2" {
count = "${var.create_ebs ? var.instance_count : 0}"
availability_zone = "us-east-1a"
size = 10
type = "gp2"
}
resource "aws_volume_attachment" "ebs-volume-1-attachment" {
count = "${var.create_ebs ? var.instance_count : 0}"
device_name = "/dev/sdf${count.index}"
volume_id = "${element(aws_ebs_volume.ebs-volume-1.*.id, count.index)}"
instance_id = "${element(aws_instance.example.*.id, count.index)}"
}
resource "aws_volume_attachment" "ebs-volume-2-attachment" {
count = "${var.create_ebs ? var.instance_count : 0}"
device_name = "/dev/sdg${count.index}"
volume_id = "${element(aws_ebs_volume.ebs-volume-2.*.id, count.index)}"
instance_id = "${element(aws_instance.example.*.id, count.index)}"
}
For more info on count.index, see the Terraform interpolation documentation.
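If you wrap the resources above in a module, calling it with and without EBS volumes is just a matter of flipping the variables. A minimal sketch, assuming the code above lives in ./modules/ec2 (the path and module names are placeholders):

module "app_with_ebs" {
  source         = "./modules/ec2"
  instance_count = 2
  create_ebs     = true
}

module "app_without_ebs" {
  source         = "./modules/ec2"
  instance_count = 1
  create_ebs     = false
}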
Related
I want to spin up an EC2 instance from an AMI which has 4 volumes, using Terraform.
Any pointers are much appreciated.
resource "aws_instance" "this" {
count = var.instance_count
ami = var.ami_id
instance_type = var.instance_type
key_name = var.key_name
iam_instance_profile = var.iam_instance_profile
disable_api_termination = var.enable_deletion_protection
user_data = var.user_data
network_interface {
network_interface_id = aws_network_interface.eth0[count.index].id
device_index = 0
}
dynamic "root_block_device" {
for_each = var.root_block_device
content {
delete_on_termination = lookup(root_block_device.value, "delete_on_termination", true)
encrypted = lookup(root_block_device.value, "encrypted", null)
iops = lookup(root_block_device.value, "iops", null)
kms_key_id = lookup(root_block_device.value, "kms_key_id", null)
volume_size = lookup(root_block_device.value, "volume_size", null)
volume_type = lookup(root_block_device.value, "volume_type", null)
}
}
Want to spin up EC2 from an AMI which has 4 volumes using Terraform
Then you don't need to do anything. The EBS volumes are already associated with the AMI via its block device mappings. Any instance launched from that AMI will automatically be spun up with the 4 EBS volumes.
More on block device mappings:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html
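As a minimal sketch (the AMI ID and instance type are placeholders), launching from such an AMI needs no extra EBS configuration at all:

resource "aws_instance" "from_ami" {
  ami           = "ami-0123456789abcdef0" # AMI that already has the 4 volumes in its block device mappings
  instance_type = "t3.medium"

  # No ebs_block_device blocks needed; the AMI's mappings are applied automatically.
}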
I have an EKS cluster with a node group that is configured with a launch template. All of the resources are created with Terraform.
launch_template.tf:
resource "aws_launch_template" "launch-template" {
name = var.name
update_default_version = var.update_default_version
instance_type = var.instance_type
key_name = var.key_name
block_device_mappings {
device_name = var.block_device_name
ebs {
volume_size = var.volume_size
}
}
ebs_optimized = var.ebs_optimized
monitoring {
enabled = var.monitoring_enabled
}
dynamic "tag_specifications" {
for_each = toset(var.resources_to_tag)
content {
resource_type = tag_specifications.key
tags = var.tags
}
}
}
eks_nodegroup.tf:
resource "aws_eks_node_group" "eks-nodegroup" {
cluster_name = var.cluster_name
node_group_name = var.node_group_name
node_role_arn = var.node_role_arn
subnet_ids = var.subnet_ids
labels = var.labels
tags = var.tags
scaling_config {
desired_size = var.desired_size
max_size = var.max_size
min_size = var.min_size
}
launch_template {
id = var.launch_template_id
version = var.launch_template_version
}
}
These resources reference each other. But at the end of the day, this setup is creating:
2 launch templates,
1 autoscaling group,
2 volumes for each instance in the autoscaling group.
I understood from this question that, because I'm using the aws_launch_template resource with aws_eks_node_group, a second launch template is being created. But I didn't understand where the second volume for each instance is coming from. One of the volumes matches my configuration: 40 GB capacity, path /dev/sda1, 120 IOPS. But the second one has 20 GB capacity, path /dev/xvda and 100 IOPS. I don't have any configuration like this anywhere in my Terraform setup.
I can't find the source of the second volume. Any guidance would be highly appreciated, thank you very much.
Your second volume is the default volume created by the aws_eks_node_group: its disk_size parameter defaults to 20 GB.
The disk_size parameter is not configurable when using a launch template; configuring it will cause an error.
I suspect you may be using a Bottlerocket AMI, which comes with two volumes: one is the OS volume and the second is the data volume. You likely want to configure the size of the data volume, which is exposed at /dev/xvdb by default.
See https://github.com/bottlerocket-os/bottlerocket#default-volumes
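If it is indeed Bottlerocket, a minimal sketch of sizing the data volume in your launch template could look like this (the /dev/xvdb device name follows the Bottlerocket default; the 40 GB size is just an example):

resource "aws_launch_template" "launch-template" {
  name = var.name

  # Existing mapping for the OS volume stays as-is.
  block_device_mappings {
    device_name = var.block_device_name
    ebs {
      volume_size = var.volume_size
    }
  }

  # Additional mapping for the Bottlerocket data volume.
  block_device_mappings {
    device_name = "/dev/xvdb"
    ebs {
      volume_size = 40
    }
  }
}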
I've deployed an ELK stack to AWS ECS with Terraform. All was running nicely for a few weeks, but 2 days ago I had to restart the instance.
Sadly, the new instance did not reuse the existing volume as its root block device, so all my Elasticsearch data is no longer available to my Kibana instance.
The data is still there, on the previous volume, which is currently unused.
So I tried many things to get this volume attached at "/dev/xvda", but without success, for example:
Using ebs_block_device instead of root_block_device
Swapping "/dev/xvda" while the instance is already running
I am using an aws_autoscaling_group with an aws_launch_configuration.
resource "aws_launch_configuration" "XXX" {
name = "XXX"
image_id = data.aws_ami.latest_ecs.id
instance_type = var.INSTANCE_TYPE
security_groups = [var.SECURITY_GROUP_ID]
associate_public_ip_address = true
iam_instance_profile = "XXXXXX"
spot_price = "0.04"
lifecycle {
create_before_destroy = true
}
user_data = templatefile("${path.module}/ecs_agent_conf_options.tmpl",
{
cluster_name = aws_ecs_cluster.XXX.name
}
)
//The volume i want to reuse was created with this configuration. I though it would
//be enough to reuse the same volume. It doesn't.
root_block_device {
delete_on_termination = false
volume_size = 50
volume_type = "gp2"
}
}
resource "aws_autoscaling_group" "YYY" {
name = "YYY"
min_size = var.MIN_INSTANCES
max_size = var.MAX_INSTANCES
desired_capacity = var.DESIRED_CAPACITY
health_check_type = "EC2"
availability_zones = ["eu-west-3b"]
launch_configuration = aws_launch_configuration.XXX.name
vpc_zone_identifier = [
var.SUBNET_1_ID,
var.SUBNET_2_ID]
}
Am I missing something obvious here?
Sadly, you cannot attach an existing volume as the root volume of an instance.
What you have to do is create a custom AMI based on your volume. This involves creating a snapshot of the volume followed by constructing the AMI from it:
Creating a Linux AMI from a snapshot
In Terraform, there is the aws_ami resource specifically for that purpose.
The following Terraform script exemplifies the process in three steps:
Creation of a snapshot of a given volume
Creation of an AMI from the snapshot
Creation of an instance from the AMI
provider "aws" {
# your data
}
resource "aws_ebs_snapshot" "snapshot" {
volume_id = "vol-0ff4363a40eb3357c" # <-- your EBS volume ID
}
resource "aws_ami" "my" {
name = "my-custom-ami"
virtualization_type = "hvm"
root_device_name = "/dev/xvda"
ebs_block_device {
device_name = "/dev/xvda"
snapshot_id = aws_ebs_snapshot.snapshot.id
volume_type = "gp2"
}
}
resource "aws_instance" "web" {
ami = aws_ami.my.id
instance_type = "t2.micro"
# key_name = "<your-key-name>"
tags = {
Name = "InstanceFromCustomAMI"
}
}
I am creating a Terraform configuration to allow the user to input the number of AWS EBS volumes they want to attach to the EC2 instance.
variable "number_of_ebs" {}
resource "aws_volume_attachment" "ebs_att" {
count = "${var.number_of_ebs}"
device_name= "/dev/sdh"
volume_id = "${element(aws_ebs_volume.newVolume.*.id, count.index)}"
instance_id = "${aws_instance.web.id}"
}
resource "aws_instance" "web" {
ami = "ami-14c5486b"
instance_type = "t2.micro"
availability_zone = "us-east-1a"
vpc_security_group_ids=["${aws_security_group.instance.id}"]
key_name="KeyPairVirginia"
tags {
Name = "HelloWorld"
}
}
resource "aws_ebs_volume" "newVolume" {
count = "${var.number_of_ebs}"
name = "${format("vol-%02d", count.index + 1)}"
availability_zone = "us-east-1a"
size = 4
type="standard"
tags {
Name = "HelloWorld"
}
}
It is giving an error. I am unsure how to dynamically assign a different name to each created volume and how to get the volume_id to attach it to the instance.
Below is the error that I get.
var.number_of_ebs
Enter a value: 2
Error: aws_ebs_volume.newVolume[0]: : invalid or unknown key: name
Error: aws_ebs_volume.newVolume[1]: : invalid or unknown key: name
If you check the docs for the aws_ebs_volume resource, you will see that the name argument is not supported.
This explains the error message.
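A sketch of the corrected resource, keeping your 0.11-style syntax: drop the unsupported name argument and put the per-volume name in a Name tag instead:

resource "aws_ebs_volume" "newVolume" {
  count             = "${var.number_of_ebs}"
  availability_zone = "us-east-1a"
  size              = 4
  type              = "standard"

  tags {
    Name = "${format("vol-%02d", count.index + 1)}"
  }
}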
Is it possible with Terraform to get the volume ID from the aws_instance ebs_block_device block, or would we need to explicitly use the aws_ebs_volume/aws_volume_attachment resources?
What I currently have is:
resource "aws_instance" "ec2_app" {
...
ebs_block_device {
device_name = "${var.app_ebs_device_name}"
volume_type = "${var.app_ebs_vol_type}"
volume_size = "${var.app_ebs_vol_size}"
delete_on_termination = "${var.app_ebs_delete_on_termination}"
encrypted = "${var.app_ebs_encrypted}"
}
...
}
I know I can change to aws_ebs_volume/aws_volume_attachment resources, but I believe that would destroy and recreate the volume (which I am trying to avoid).
The docs are a little bit misleading on this point, but you can get the volume id of the ebs_block_device like this:
"${lookup(aws_instance.example.ebs_block_device[0], "volume_id")}"
Let's assume you have created a volume as follows:
resource "aws_ebs_volume" "ebs-volume-1" {
availability_zone = "eu-west-1a"
size = 8
type = "gp2"
tags {
Name = "extra volume data"
}
You can get the volume ID by specifying
volume_id = "${aws_ebs_volume.ebs-volume-1.id}"