OK, so I am trying to attach an EBS volume, which I created with Terraform, to an instance in an ASG via userdata. The issue is that the volume and the instance end up in different AZs, so the attach call fails. Below are the steps I am trying:
resource "aws_ebs_volume" "this" {
for_each = var.ebs_block_device
size = lookup(each.value,"volume_size", null)
type = lookup(each.value,"volume_type", null)
iops = lookup(each.value, "iops", null)
encrypted = lookup(each.value, "volume_encrypt", null)
kms_key_id = lookup(each.value, "kms_key_id", null)
availability_zone = join(",",random_shuffle.az.result)
}
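(For reference, the random_shuffle.az resource referenced above isn't shown in the question; presumably it looks something like the sketch below, where var.azs is an assumed variable name.)
# Assumed sketch of the AZ picker referenced by the volume above.
resource "random_shuffle" "az" {
  input        = var.azs # assumed: list of candidate AZ names
  result_count = 1       # keep a single AZ so join() yields one value
}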
In the aws_ebs_volume resource, I use the random provider to pick one AZ from a list of AZs, and the same list is provided to the ASG resource below:
resource "aws_autoscaling_group" "this" {
desired_capacity = var.desired_capacity
launch_configuration = aws_launch_configuration.this.id
max_size = var.max_size
min_size = var.min_size
name = var.name
vpc_zone_identifier = var.subnet_ids // <------ HERE
health_check_grace_period = var.health_check_grace_period
load_balancers = var.load_balancer_names
target_group_arns = var.target_group_arns
tag {
key = "Name"
value = var.name
propagate_at_launch = true
}
}
And here is the userdata I am using:
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
instanceId=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 attach-volume --volume-id ${ebs_volume_id} --instance-id $instanceId --device /dev/nvme1n1
This attaches the newly created volume, since I pass the ${ebs_volume_id} output of the resource above into the template.
But it fails because the instance and the volume end up in different AZs.
Can anyone suggest a better solution than hardcoding the AZ on both the ASG and the volume?
I'd have to understand more about what you're trying to do to solve this with just the aws provider and Terraform. And honestly, most ideas are going to be a bit complex.
You could have one ASG per AZ. Otherwise the ASG will pick some AZ at each launch, and you'll end up with more instances in one AZ than you have volumes there, and volumes in other AZs with no instances to attach to.
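A rough sketch of that per-AZ layout, using for_each over the AZ list (variable names like var.azs and var.subnet_id_by_az are assumptions, not from the question):
# Sketch: one volume and one single-AZ ASG per availability zone.
resource "aws_ebs_volume" "per_az" {
  for_each          = toset(var.azs)  # assumed list of AZ names
  availability_zone = each.value
  size              = var.volume_size # assumed
}

resource "aws_autoscaling_group" "per_az" {
  for_each             = toset(var.azs)
  name                 = "${var.name}-${each.value}"
  min_size             = var.min_size
  max_size             = var.max_size
  launch_configuration = aws_launch_configuration.this.id
  # assumed map from AZ name to the subnet ID in that AZ
  vpc_zone_identifier  = [var.subnet_id_by_az[each.value]]
}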
So you could create a number of volumes per AZ and an ASG per AZ. The userdata should then list all the volumes in the instance's AZ that are not attached to an instance, pick the ID of the first unattached one, and attach it. If all of them are attached, you should trigger your alerting, because you have more instances than volumes.
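A minimal sketch of that userdata, assuming the volumes carry a known tag (here Name=data-volume, which is an assumption) and the instance role is allowed to call describe-volumes and attach-volume:
#!/bin/bash
# Sketch: find an unattached volume in this instance's AZ and attach it.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
AZ=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/placement/availability-zone)
INSTANCE_ID=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id)

# Pick the first available (unattached) volume in this AZ with the assumed tag.
VOLUME_ID=$(aws ec2 describe-volumes \
  --filters "Name=availability-zone,Values=$AZ" "Name=status,Values=available" "Name=tag:Name,Values=data-volume" \
  --query 'Volumes[0].VolumeId' --output text)

if [ "$VOLUME_ID" = "None" ] || [ -z "$VOLUME_ID" ]; then
  echo "No unattached volume left in $AZ" >&2  # hook your alerting here
  exit 1
fi

aws ec2 attach-volume --volume-id "$VOLUME_ID" --instance-id "$INSTANCE_ID" --device /dev/sdf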
Any attempt to do this with a single ASG is really an attempt at writing your own ASG, but in a way that fights with your actual ASG.
But there is a company that offers managing this as a service; they also help you run these as spot instances to save cost: https://spot.io/
The elastigroup resource is an ASG managed by them, so you won't have an AWS ASG anymore, but they have some interesting stateful configurations.
We support instance persistence via the following configurations (most values are boolean). For more information on instance persistence, please see: Stateful configuration
persist_root_device - (Optional) Boolean, should the instance maintain its root device volumes.
persist_block_devices - (Optional) Boolean, should the instance maintain its data volumes.
persist_private_ip - (Optional) Boolean, should the instance maintain its private IP.
block_devices_mode - (Optional) String, determines how we attach the data volumes to the data devices; possible values: "reattach" and "onLaunch" (default is "onLaunch").
private_ips - (Optional) List of private IPs to associate to the group instances (e.g. "172.1.1.0"). Please note: this setting will only apply if persistence.persist_private_ip is set to true.
stateful_deallocation {
  should_delete_images             = false
  should_delete_network_interfaces = false
  should_delete_volumes            = false
  should_delete_snapshots          = false
}
This allows you to have an autoscaler that preserves volumes and handles the complexities for you.
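As a rough sketch of how those settings might sit on a spotinst_elastigroup_aws resource (the surrounding required arguments such as image, instance types, and capacity are omitted here, and the values shown are assumptions, not a working configuration):
# Sketch only: stateful settings on a Spot.io elastigroup.
resource "spotinst_elastigroup_aws" "this" {
  name = var.name
  # ... capacity, product, image_id, instance types, subnets, etc. omitted ...

  persist_block_devices = true        # keep data volumes across replacements
  persist_private_ip    = true
  block_devices_mode    = "reattach"  # reattach the same volumes on replacement

  stateful_deallocation {
    should_delete_images             = false
    should_delete_network_interfaces = false
    should_delete_volumes            = false
    should_delete_snapshots          = false
  }
}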
Related
I have 3 existing EBS volumes that I am trying to attach to instances created by autoscaling groups. Below is the Terraform code showing how the EBS volumes are defined:
EBS Volumes
resource "aws_ebs_volume" "volumes" {
count = "${(var.enable ? 1 : 0) * var.number_of_zones}"
availability_zone = "${element(var.azs, count.index)}"
size = "${var.volume_size}"
type = "${var.volume_type}"
lifecycle {
ignore_changes = [
"tags",
]
}
tags {
Name = "${var.cluster_name}-${count.index + 1}"
}
}
My plan is to first use the Terraform import utility so the volumes can be managed by Terraform. Without the import, Terraform assumes I am trying to create new EBS volumes, which I do not want.
Additionally, I discovered the aws_volume_attachment resource for attaching these volumes to instances. I'm struggling to determine what value to put as the instance_id in this resource:
Volume Attachment
resource "aws_volume_attachment" "volume_attachment" {
count = length("${aws_ebs_volume.volumes.id}")
device_name = "/dev/sdf"
volume_id = aws_ebs_volume.volumes.*.id
instance_id = "instance_id_from_autoscaling_group"
}
Additionally, the launch configuration has an ebs_block_device block; do I need anything else in it? Any advice on this matter would be helpful, as I am having some trouble.
ebs_block_device {
  device_name = "/dev/sdf"
  no_device   = true
}
I'm struggling to determine what value to put as the instance_id in this resource
If you create an ASG using TF, you don't have access to the instance IDs. The reason is that the ASG is treated as one entity rather than as individual instances.
The only way to get the instance IDs from an ASG created this way would be through an external data source or a Lambda function invoked as a data source.
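For illustration only, one way to look up the current instance IDs is the aws_instances data source filtered on the tag the ASG adds automatically; note that this only reflects whatever instances exist at plan time, so it goes stale on every scaling event (the ASG name below is assumed):
# Illustration only: resolves the ASG's current instances at plan time.
data "aws_instances" "asg" {
  instance_tags = {
    "aws:autoscaling:groupName" = "my-asg-name" # assumed ASG name
  }
  instance_state_names = ["running"]
}

output "asg_instance_ids" {
  value = data.aws_instances.asg.ids
}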
But even if you could theoretically do it, instances in an ASG should be treated as fully disposable, interchangeable and identical. You should not need to customize them individually, as they can be terminated and replaced at any time by AWS's AZ rebalancing, instance failures or scaling activities.
Assume I have an existing Elastic IP on my AWS account.
For reasons beyond the scope of this question, this EIP is not (and cannot be) managed via Terraform.
I now want to assign this EIP (say 11.22.33.44) to an EC2 instance I create via TF.
The traditional approach would of course be to create both the EIP and the EC2 instance via TF:
resource "aws_eip" "my_instance_eip" {
instance = "my_instance.id"
vpc = true
}
resource "aws_eip_association" "my_eip_association" {
instance_id = "my_instance.id"
allocation_id = "aws_eip.my_instance_eip.id"
}
Is there a way, however, to tell TF that the EC2 instance should be assigned the EIP 11.22.33.44, which lives outside of the TF lifecycle?
You can use the aws_eip data source to get the info of your existing EIP and then use that in your aws_eip_association:
data "aws_eip" "my_instance_eip" {
public_ip = "11.22.33.44"
}
resource "aws_eip_association" "my_eip_association" {
instance_id = aws_instance.my_instance.id
allocation_id = data.aws_eip.my_instance_eip.id
}
I have created an EBS volume that I can attach to EC2 instances using Terraform, but I cannot work out how to get the EBS volume to connect to an EC2 instance created by an autoscaling group.
Code that works:
resource "aws_volume_attachment" "ebs_name" {
device_name = "/dev/sdh"
volume_id = aws_ebs_volume.name.id
instance_id = aws_instance.server.id
}
Code that doesn't work:
resource "aws_volume_attachment" "ebs_name" {
device_name = "/dev/sdh"
volume_id = aws_ebs_volume.name.id
instance_id = aws_launch_template.asg-nginx.id
}
What I am hoping for is an autoscaling launch template that attaches an EBS volume that already exists, allowing for a high-performance EBS share instead of a "we told you not to put code on there" EFS share.
Edit: I am using a multi-attach EBS volume. I can attach it manually to multiple ASG-created EC2 instances and it works; I just can't do it using Terraform.
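For reference, a Multi-Attach-capable volume like the one described would be declared roughly like this (the AZ, size, IOPS, and io2 type here are assumptions; Multi-Attach is only supported on Provisioned IOPS volumes):
# Sketch: a Multi-Attach-capable EBS volume; all values are assumed.
resource "aws_ebs_volume" "shared" {
  availability_zone    = "us-east-1a" # assumed AZ
  size                 = 100
  type                 = "io2"
  iops                 = 3000
  multi_attach_enabled = true
}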
Edit 2: I finally settled on a user_data entry in Terraform that ran an AWS command line bash script to attach the multi-attach EBS.
Script:
#!/bin/bash
[…aws keys here…]
aws ec2 attach-volume --device /dev/sdxx --instance-id `cat /var/lib/cloud/data/instance-id` --volume-id vol-01234567890abc
reboot
Terraform:
data "template_file" "shell-script" {
template = file("path/to/script.sh")
}
data "template_cloudinit_config" "script_sh" {
gzip = false
base64_encode = true
part {
content_type = "text/x-shellscript"
content = data.template_file.shell-script.rendered
}
}
resource "aws_launch_template" "template_name" {
[…]
user_data = data.template_cloudinit_config.mount_sh.rendered
[…]
}
The risk here is storing a user's AWS keys in the script, but as the script is never stored on the servers, it's not a big deal: anyone with access to the user_data already has access to better keys than the ones used here.
This would require Terraform to run every time a new instance is created as part of a scaling event, which would require extra automation to invoke it.
Instead, you should look at adding a lifecycle hook to your autoscaling group.
You could configure the hook to publish an SNS notification that invokes a Lambda function, which then attaches the volume to your new instance.
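A minimal sketch of the hook itself (the SNS topic, the IAM role, and the Lambda that actually attaches the volume are assumed and not shown):
# Sketch: pause new instances at launch and notify SNS; a subscribed Lambda
# (not shown) would attach the volume and complete the lifecycle action.
resource "aws_autoscaling_lifecycle_hook" "attach_volume" {
  name                    = "attach-ebs-on-launch"             # assumed name
  autoscaling_group_name  = aws_autoscaling_group.this.name    # assumed reference
  lifecycle_transition    = "autoscaling:EC2_INSTANCE_LAUNCHING"
  default_result          = "ABANDON"
  heartbeat_timeout       = 300
  notification_target_arn = aws_sns_topic.scale_out.arn        # assumed topic
  role_arn                = aws_iam_role.asg_notifications.arn # assumed role
}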
I have successfully created an autoscaling group using Terraform. I would now like to find a way to dynamically name the provisioned instances based on an index value.
For an aws_instance resource, this is easily done:
resource "aws_instance" "bar" {
count = 3
tags {
Name = "${var.instance_name_gridNode}${count.index + 1}"
App-code = "${var.app-code}"
PC-code = "${var.pc-code}"
}
}
This will result in 3 instances named:
1) Node1
2) Node2
3) Node3
However, since aws_autoscaling_group provisions instances dynamically (for both scale-in and scale-out), how does one control the naming convention of the provisioned instances?
resource "aws_autoscaling_group" "gridrouter_asg" {
name = "mygridrouter"
launch_configuration = "${aws_launch_configuration.gridGgr_lcfg.id}"
min_size = 1
max_size = 2
health_check_grace_period = 150
desired_capacity = 1
vpc_zone_identifier = ["${var.subnet_id}"]
health_check_type = "EC2"
tags = [
{
key = "Name"
value = "${var.instance_name_gridGgr_auto}"
propagate_at_launch = true
},
]
}
AWS autoscaling groups can be tagged, as with many resources, and with the propagate_at_launch flag those tags are also passed to the instances the group creates.
Unfortunately these tags are entirely static, and the ASG itself has no way of tagging instances differently. On top of this, the default scale-in policy will not remove the newest instances first, so even if you did tag your instances as Node1, Node2 and Node3, when the autoscaling group scaled in it would most likely (depending on criteria) remove Node1, leaving you with Node2 and Node3. While it's possible to change the termination policy to NewestInstance so that it removes Node3, that is unlikely to be an optimal scale-in policy.
I'd question why you feel you need to treat ASG instances differently from each other, and maybe rethink how you manage your instances when they are more ephemeral, as is generally the case in modern clouds and even more so when using autoscaling groups.
If you really did want to tag instances differently for some specific reason, you could have the ASG not propagate the Name tag at launch and then have a Lambda function trigger on the scale-out event (either via a lifecycle hook or a CloudWatch event) to determine the tag value and tag the instance with it.
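A minimal sketch of the CloudWatch event wiring for that idea (the Lambda itself, which would compute and apply the Name tag, plus the aws_lambda_permission for the rule, are assumed and not shown):
# Sketch: fire an (assumed) tagging Lambda on successful ASG launches.
resource "aws_cloudwatch_event_rule" "asg_launch" {
  name = "asg-instance-launch" # assumed name
  event_pattern = jsonencode({
    source      = ["aws.autoscaling"]
    detail-type = ["EC2 Instance Launch Successful"]
  })
}

resource "aws_cloudwatch_event_target" "tag_lambda" {
  rule = aws_cloudwatch_event_rule.asg_launch.name
  arn  = aws_lambda_function.tagger.arn # assumed Lambda that assigns NodeN tags
}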
One hack to do this is to pass a user-data script to the instance or autoscaling group.
Please find below a link to the answer to a similar question:
https://stackoverflow.com/a/44613778/3304632
I got the following error from AWS today.
"We currently do not have sufficient m3.large capacity in the Availability Zone you requested (us-east-1a). Our system will be working on provisioning additional capacity. You can currently get m3.large capacity by not specifying an Availability Zone in your request or choosing us-east-1e, us-east-1b."
What does this mean exactly? It sounds like AWS doesn't have the physical resources to allocate the virtual resources I need. That seems unbelievable, though.
What's the solution? Is there an easy way to change the availability zone of an instance?
Or do I need to create an AMI and restore it in a new availability zone?
This is not a new issue, and you cannot change the availability zone of an existing instance. The best option is to create an AMI and relaunch the instance in the new AZ, as you have already said; you would then have everything in place. If you want to go across regions, see this: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html
You can also try getting reserved instances, which guarantee you capacity so that you can launch the instances whenever you need them.
I fixed this error by fixing my aws_region and availability_zone values. Once I added aws_subnet_ids, the error message showed me exactly which zone my EC2 instance was being created in.
variable "availability_zone" {
default = "ap-southeast-2c"
}
variable "aws_region" {
description = "EC2 Region for the VPC"
default = "ap-southeast-2c"
}
data "aws_vpc" "default" {
default = true
}
data "aws_subnet_ids" "all" {
vpc_id = "${data.aws_vpc.default.id}"
}
resource "aws_instance" "ec2" {
....
subnet_id = "${element(data.aws_subnet_ids.all.ids, 0)}"
availability_zone = "${var.availability_zone}"
}