Configure an AWS auto-scaling group for price in Terraform?

I have created an AMI that performs work when a machine is started using systemd. Since the work is not time-critical, I would like to optimize for cost using an AWS auto-scaling group. I am using Terraform to manage my infrastructure.
Here is what I have so far:
# ...
resource "aws_launch_template" "default" {
  name = "autoscaling-launch-template"

  capacity_reservation_specification {
    capacity_reservation_preference = "open"
  }

  credit_specification {
    cpu_credits = "standard"
  }

  iam_instance_profile {
    name = aws_iam_instance_profile.default.name
  }

  image_id = data.aws_ami.default.id

  instance_market_options {
    market_type = "spot"
  }

  instance_type = "t2.small"
  key_name      = var.master_key

  monitoring {
    enabled = true
  }

  placement {
    availability_zone = "us-east-1a"
  }

  vpc_security_group_ids = [aws_security_group.default.id]

  tag_specifications {
    resource_type = "instance"
  }

  user_data = base64encode(local.user_data)
}
resource "aws_autoscaling_group" "default" {
name = "my-autoscaling-group"
min_size = 1
max_size = 5
desired_capacity = 2
availability_zones = [ "us-east-1a" ]
launch_template {
id = aws_launch_template.default.id
version = "$Latest"
}
lifecycle {
create_before_destroy = true
}
}
# ...
What I would like to achieve is the following:
When the spot instance price is low, scale up to the maximum
When the spot instance price is high, scale down to the minimum
"high" and "low" price should be defined approximately using a rolling average or similar. I don't want to have to maintain minimum and maximum prices.
I always want to use t2.small.
How can I achieve this in Terraform?

The Terraform Registry has a nice verified module for creating auto scaling groups:
https://registry.terraform.io/modules/terraform-aws-modules/autoscaling/aws/3.4.0
You can use the spot_price variable to launch spot instances into the ASG.
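For example, wiring spot instances through that module might look roughly like this; a minimal sketch against the 3.x interface, where everything other than spot_price (the AMI data source, security group, and subnet references) is assumed from your existing config:
module "worker_asg" {
  source  = "terraform-aws-modules/autoscaling/aws"
  version = "3.4.0"

  name = "my-spot-asg"

  # launch configuration
  lc_name         = "my-spot-lc"
  image_id        = data.aws_ami.default.id
  instance_type   = "t2.small"
  spot_price      = "0.023" # ceiling you are willing to pay per instance-hour
  security_groups = [aws_security_group.default.id]
  user_data       = local.user_data

  # auto scaling group
  asg_name            = "my-spot-asg"
  vpc_zone_identifier = [aws_subnet.default.id]
  min_size            = 1
  max_size            = 5
  desired_capacity    = 2
}
Note that spot_price is a fixed ceiling, not a rolling average: the ASG simply keeps requesting spot capacity as long as the market price stays at or below that value.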

Related

Launch template mtc should not specify an instance profile. The noderole in your request will be used to construct an instance profile

My launch template specifies an IAM instance profile and my node group has a node role ARN. Based on this error, I removed the iam_instance_profile block from my template resource block, and it still gave me the same error:
"Launch template mtc should not specify an instance profile. The noderole in your request will be used to construct an instance profile."
Here are my launch template resource blocks with the instance profile included:
resource "aws_launch_template" "node" {
image_id = var.image_id
instance_type = var.instance_type
key_name = var.key_name
instance_initiated_shutdown_behavior = "terminate"
name = var.name
user_data = base64encode("node_userdata.tpl")
# vpc_security_group_ids = var.security_group_ids
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = 20
}
}
iam_instance_profile {
name = aws_iam_instance_profile.node.name
}
monitoring {
enabled = true
}
}
resource "aws_iam_instance_profile" "node" {
name_prefix = var.name
role = aws_iam_role.node.id
}
resource "aws_iam_role" "node" {
assume_role_policy = data.aws_iam_policy_document.assume_role_ec2.json
name = var.name
}
data "aws_iam_policy_document" "assume_role_ec2" {
statement {
actions = ["sts:AssumeRole"]
effect = "Allow"
principals {
identifiers = ["ec2.amazonaws.com"]
type = "Service"
}
}
}
When I first tried to apply this I got that error, so I removed the instance profile entirely and tried again without it, like so:
resource "aws_launch_template" "node" {
image_id = var.image_id
instance_type = var.instance_type
key_name = var.key_name
instance_initiated_shutdown_behavior = "terminate"
name = var.name
user_data = base64encode("node_userdata.tpl")
# vpc_security_group_ids = var.security_group_ids
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = 20
}
}
monitoring {
enabled = true
}
}
Got the same error both times. Here's my node group resource block as well:
resource "aws_eks_node_group" "nodes_eks" {
cluster_name = aws_eks_cluster.eks.name
node_group_name = "eks-node-group"
node_role_arn = aws_iam_role.eks_nodes.arn
subnet_ids = module.vpc.private_subnets
# remote_access {
# ec2_ssh_key = aws_key_pair.bastion_auth.id
# }
scaling_config {
desired_size = 3
max_size = 6
min_size = 3
}
ami_type = "CUSTOM"
capacity_type = "ON_DEMAND"
force_update_version = false
# instance_types = [var.instance_type]
labels = {
role = "nodes-pool-1"
}
launch_template {
id = aws_launch_template.node.id
version = aws_launch_template.node.default_version
}
# version = var.k8s_version
depends_on = [
aws_iam_role_policy_attachment.amazon_eks_worker_node_policy,
aws_iam_role_policy_attachment.amazon_eks_cni_policy,
aws_iam_role_policy_attachment.amazon_ec2_container_registry_read_only,
]
}
In this case, there are multiple points to take care of, starting with [1]:
An object representing a node group launch template specification. The launch template cannot include SubnetId, IamInstanceProfile, RequestSpotInstances, HibernationOptions, or TerminateInstances, or the node group deployment or update will fail.
As per the documentation [2], you cannot specify any of the following:
Instance profile - the node IAM role will be used instead
Subnets - the node group's subnet_ids will be used, and they are already defined in the node group configuration
Shutdown behavior - EKS controls the instance lifecycle
Note that the table marks these settings as prohibited, which means they can never be used. Additionally, in [2] you can find this as well:
Some of the settings in a launch template are similar to the settings used for managed node configuration. When deploying or updating a node group with a launch template, some settings must be specified in either the node group configuration or the launch template. Don't specify both places. If a setting exists where it shouldn't, then operations such as creating or updating a node group fail.
So you were pretty close when you removed the iam_instance_profile, but you still have to get rid of the instance_initiated_shutdown_behavior argument:
resource "aws_launch_template" "node" {
image_id = var.image_id
instance_type = var.instance_type
key_name = var.key_name
name = var.name
user_data = base64encode("node_userdata.tpl")
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = 20
}
}
monitoring {
enabled = true
}
}
I strongly suggest reading through the second document as it contains a lot of useful information about what to do when using a custom AMI.
[1] https://docs.aws.amazon.com/eks/latest/APIReference/API_LaunchTemplateSpecification.html
[2] https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html#launch-template-basics

Terraform launch template

I'm creating an AWS EKS cluster via Terraform, but when I create the cluster node group with a launch template I end up with two launch templates: one with the name and settings I specified, and a second one with a random name but the same settings. The only difference between the two is that the second (automatically created) one has an IAM instance profile.
If I try to specify an IAM instance profile in my own template, I get an error saying I cannot use it there.
Am I doing something wrong, or is it normal that two launch templates are created?
# eks node launch template
resource "aws_launch_template" "this" {
  name          = "${var.eks_cluster_name}-node-launch-template"
  instance_type = var.instance_types[0]
  image_id      = var.node_ami

  block_device_mappings {
    device_name = "/dev/xvda"
    ebs {
      volume_size = 80
      volume_type = "gp3"
      throughput  = "125"
      encrypted   = false
      iops        = 3000
    }
  }

  lifecycle {
    create_before_destroy = true
  }

  network_interfaces {
    security_groups = [data.aws_eks_cluster.this.vpc_config[0].cluster_security_group_id]
  }

  user_data = base64encode(templatefile("${path.module}/userdata.tpl", merge(local.userdata_vars, local.cluster_data)))

  tags = {
    "eks:cluster-name"   = var.eks_cluster_name
    "eks:nodegroup-name" = var.node_group_name
  }

  tag_specifications {
    resource_type = "instance"
    tags = {
      Name                 = "${var.eks_cluster_name}-node"
      "eks:cluster-name"   = var.eks_cluster_name
      "eks:nodegroup-name" = var.node_group_name
    }
  }

  tag_specifications {
    resource_type = "volume"
    tags = {
      "eks:cluster-name"   = var.eks_cluster_name
      "eks:nodegroup-name" = var.node_group_name
    }
  }
}
# eks nodes
resource "aws_eks_node_group" "this" {
  cluster_name    = aws_eks_cluster.this.name
  node_group_name = var.node_group_name
  node_role_arn   = aws_iam_role.eksNodesGroup.arn
  subnet_ids      = data.aws_subnet_ids.private.ids

  scaling_config {
    desired_size = 1
    max_size     = 10
    min_size     = 1
  }

  update_config {
    max_unavailable = 1
  }

  launch_template {
    version = aws_launch_template.this.latest_version
    id      = aws_launch_template.this.id
  }

  lifecycle {
    create_before_destroy = true
    ignore_changes = [
      scaling_config[0].desired_size
    ]
  }

  # Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
  # Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
  depends_on = [
    aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
    aws_launch_template.this
  ]
}
I was expecting Terraform to create only one launch template.
Change the name attribute to name_prefix.
Use:
name_prefix = "${var.eks_cluster_name}-node-launch-template"
Instead of:
name = "${var.eks_cluster_name}-node-launch-template"
The name_prefix attribute is the best way to generate a unique launch template name from a prefix (in your case, ${var.eks_cluster_name}).
Read more in the aws_launch_template resource documentation.
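name_prefix also pairs well with the create_before_destroy lifecycle you already have on the template: Terraform can create the replacement template under a freshly generated name before destroying the old one, which a fixed name would prevent. The relevant lines, with everything else in your template unchanged:
resource "aws_launch_template" "this" {
  # a unique name is generated from this prefix on every replacement
  name_prefix = "${var.eks_cluster_name}-node-launch-template"

  # ... image_id, block_device_mappings, network_interfaces, etc. unchanged ...

  lifecycle {
    create_before_destroy = true
  }
}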

Terraform multiple instances but by separate execution

I am trying to create AWS instances with a load balancer, a security group, and three instances ---> GROUP 1
I can do this by declaring the appropriate resources.
Now I want to create multiples of such groups, each independent of the previous instances ---> GROUP 2
I want this for security of the groups, so that one group's information does not overlap with another's.
I tried to look this up a lot, but couldn't find an approach.
Below is an example of an instance:
resource "aws_instance" "node" {
ami = data.aws_ami.ubuntu.id
subnet_id = aws_subnet.development-private-1a.id
key_name = aws_key_pair.nodes.key_name
instance_type = var.instance_type
vpc_security_group_ids = [aws_security_group.dev-ec2-sg.id]
tags = {
Name = "${var.app_name}"
#Environment = "production"
}
root_block_device {
volume_type = "gp2"
volume_size = 8
delete_on_termination = true
}
user_data = file("install_apache.sh")
}
resource "aws_lb_target_group_attachment" "node" {
target_group_arn = aws_lb_target_group.dev.arn
target_id = aws_instance.node.id
port = 80
}
I want to add multiples of these instances with different security groups, load balancers, and all the other resources, but I don't want to add additional copies of the same code to the Terraform file. I want those instances to be independent of this one, but the problem I am facing is that Terraform only manipulates this single instance.
Based on the comments, you could consider organizing your instance code and its dependents (e.g. the target group attachment) as Terraform (TF) modules. Also, since you wish to create multiple instances of the same type, you could consider using aws_autoscaling_group, which would allow you not only to create multiple instances easily but also to manage them easily.
You could then define a module as follows. Below is only a partial example; it does not use aws_autoscaling_group, but instead creates multiple instances using count:
./module/ec2/main.tf
variable "subnet_id" {}
variable "app_name" {}
variable "key_pair" {}
variable "security_group_id" {}
variable "target_group_arn" {}
variable "instance_count" {
default = 1
}
data "aws_ami" "ubuntu" {
# ...
}
resource "aws_instance" "node" {
count = var.instance_count
ami = data.aws_ami.ubuntu.id
subnet_id = var.subnet_id
key_name = var.key_pair
instance_type = var.instance_type
vpc_security_group_ids = [var.security_group_id]
tags = {
Name = "${var.app_name}"
#Environment = "production"
}
root_block_device {
volume_type = "gp2"
volume_size = 8
delete_on_termination = true
}
user_data = file("install_apache.sh")
}
resource "aws_lb_target_group_attachment" "node" {
count = var.instance_count
target_group_arn = var.target_group_arn
target_id = aws_instance.node[count.index].id
port = 80
}
# some outputs skipped
Having such a module, in your parent file/module you would create the GROUP 1 and GROUP 2 instances as follows (again, just a partial example):
./main.tf
# resources such as LB, SGs, subnets, etc.

module "group1" {
  source            = "./module/ec2/"
  instance_count    = 3
  security_group_id = <security-group-id1>
  target_group_arn  = aws_lb_target_group.dev.arn
  # other parameters
}

module "group2" {
  source            = "./module/ec2/"
  instance_count    = 3
  security_group_id = <security-group-id2>
  target_group_arn  = aws_lb_target_group.dev.arn
  # other parameters
}
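If the number of groups grows, you could also iterate over a map with for_each instead of repeating the module block (for_each on module blocks requires Terraform 0.13 or newer); a minimal sketch, where the group names and security group IDs are placeholders:
locals {
  groups = {
    group1 = { security_group_id = "<security-group-id1>" }
    group2 = { security_group_id = "<security-group-id2>" }
  }
}

module "group" {
  source   = "./module/ec2/"
  for_each = local.groups

  instance_count    = 3
  security_group_id = each.value.security_group_id
  target_group_arn  = aws_lb_target_group.dev.arn
  # other parameters
}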

How can I get current aws spot price through Terraform

I would like to create an auto scaling group in Terraform, get the spot price through a data source, and create the launch template with the updated spot price, for example:
resource "aws_launch_template" "launch_cfg_spot" {
count = length(var.pricing)
name_prefix = "launch_cfg_spot_${count.index}"
instance_type = var.pricing[count.index].InstanceType
image_id = "ami-0ff8a91507f77f867"
instance_market_options {
market_type = "spot"
spot_options {
max_price = var.pricing[count.index].price
}
}
network_interfaces{
subnet_id = var.subnets[var.pricing[count.index].az]
}
}
For now I have implemented this with an external script that calls describe_spot_price_history via boto3, but I am sure there is a way to get the price through Terraform.
Since Terraform AWS provider 3.1.0 was released, there is a data source called aws_ec2_spot_price. I base the lookup on the desired subnet (spot prices differ from one availability zone to another), but you can certainly adjust it to your needs. I also add two percent on top to prevent the instance from being terminated due to price volatility:
data "aws_subnet" "selected" {
id = var.subnet_id
}
data "aws_ec2_spot_price" "current" {
instance_type = var.instance_type
availability_zone = data.aws_subnet.selected.availability_zone
filter {
name = "product-description"
values = ["Linux/UNIX"]
}
}
locals {
spot_price = data.aws_ec2_spot_price.current.spot_price + data.aws_ec2_spot_price.current.spot_price * 0.02
common_tags = {
ManagedBy = "terraform"
}
}
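The computed local can then be passed as the maximum spot price on the launch template; a minimal sketch, assuming a launch template shaped like the one in the question (the resource and variable names here are illustrative):
resource "aws_launch_template" "spot" {
  name_prefix   = "launch_cfg_spot_"
  instance_type = var.instance_type
  image_id      = var.image_id

  instance_market_options {
    market_type = "spot"
    spot_options {
      # current spot price plus the 2% buffer computed in locals above
      max_price = local.spot_price
    }
  }

  network_interfaces {
    subnet_id = var.subnet_id
  }

  tags = local.common_tags
}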

How can I tag instances launched from autoscaling using Terraform?

I'm using Terraform to set up an ECS cluster. This is my launch configuration:
resource "aws_launch_configuration" "launch_config" {
name_prefix = "my_project_lc"
image_id = "ami-ff15039b"
instance_type = "t2.medium"
user_data = "${data.template_file.user_data.rendered}"
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "autoscaling_group" {
name = "my_project_asg"
max_size = 2
min_size = 1
launch_configuration = "${aws_launch_configuration.launch_config.name}"
vpc_zone_identifier = ["${aws_subnet.public.id}"]
}
It's working fine, but the EC2 instances have no name (no "Name" tag). How can I change my config in order to give the instances a meaningful name? A prefix or something...
Thanks
Yes, it is possible. See the documentation for the aws_autoscaling_group resource. Example code:
resource "aws_autoscaling_group" "bar" {
name = "my_project_asg"
max_size = 2
min_size = 1
launch_configuration = "${aws_launch_configuration.launch_config.name}"
vpc_zone_identifier = ["${aws_subnet.public.id}"]
tag {
key = "Name"
value = "something-here"
propagate_at_launch = true
}
tag {
key = "lorem"
value = "ipsum"
propagate_at_launch = false
}
}
Alternatively, you can use terraform-aws-autoscaling module which implements different types of tags.
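If you have many tags to apply, a dynamic "tag" block over a map also works in plain Terraform without the module; a minimal sketch, where the local map contents are illustrative:
locals {
  asg_tags = {
    Name        = "something-here"
    Environment = "production"
    Project     = "my_project"
  }
}

resource "aws_autoscaling_group" "bar" {
  name                 = "my_project_asg"
  max_size             = 2
  min_size             = 1
  launch_configuration = aws_launch_configuration.launch_config.name
  vpc_zone_identifier  = [aws_subnet.public.id]

  # one tag block per map entry, all propagated to the launched instances
  dynamic "tag" {
    for_each = local.asg_tags
    content {
      key                 = tag.key
      value               = tag.value
      propagate_at_launch = true
    }
  }
}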