Terraform EKS node group with a launch template creates two launch templates

I'm creating an AWS EKS cluster via Terraform, but when I create the cluster node group with a launch template, I get two launch templates: one with the name and settings I specified, and a second with a random name but the same settings. The only difference between the two launch templates is the IAM instance profile, which only the second (automatically created) template has.
If I try to specify an IAM instance profile in my launch template, I get an error saying I cannot use it there.
Am I doing something wrong, or is it normal that two launch templates get created?
# eks node launch template
resource "aws_launch_template" "this" {
  name          = "${var.eks_cluster_name}-node-launch-template"
  instance_type = var.instance_types[0]
  image_id      = var.node_ami

  block_device_mappings {
    device_name = "/dev/xvda"

    ebs {
      volume_size = 80
      volume_type = "gp3"
      throughput  = 125
      encrypted   = false
      iops        = 3000
    }
  }

  lifecycle {
    create_before_destroy = true
  }

  network_interfaces {
    security_groups = [data.aws_eks_cluster.this.vpc_config[0].cluster_security_group_id]
  }

  user_data = base64encode(templatefile("${path.module}/userdata.tpl", merge(local.userdata_vars, local.cluster_data)))

  tags = {
    "eks:cluster-name"   = var.eks_cluster_name
    "eks:nodegroup-name" = var.node_group_name
  }

  tag_specifications {
    resource_type = "instance"

    tags = {
      Name                 = "${var.eks_cluster_name}-node"
      "eks:cluster-name"   = var.eks_cluster_name
      "eks:nodegroup-name" = var.node_group_name
    }
  }

  tag_specifications {
    resource_type = "volume"

    tags = {
      "eks:cluster-name"   = var.eks_cluster_name
      "eks:nodegroup-name" = var.node_group_name
    }
  }
}
# eks nodes
resource "aws_eks_node_group" "this" {
  cluster_name    = aws_eks_cluster.this.name
  node_group_name = var.node_group_name
  node_role_arn   = aws_iam_role.eksNodesGroup.arn
  subnet_ids      = data.aws_subnet_ids.private.ids

  scaling_config {
    desired_size = 1
    max_size     = 10
    min_size     = 1
  }

  update_config {
    max_unavailable = 1
  }

  launch_template {
    version = aws_launch_template.this.latest_version
    id      = aws_launch_template.this.id
  }

  lifecycle {
    create_before_destroy = true
    ignore_changes = [
      scaling_config[0].desired_size
    ]
  }

  # Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
  # Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
  depends_on = [
    aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
    aws_launch_template.this
  ]
}
I expected Terraform to create only one launch template.

Change the name attribute to name_prefix.
Use:
name_prefix = "${var.eks_cluster_name}-node-launch-template"
Instead of:
name = "${var.eks_cluster_name}-node-launch-template"
The name_prefix attribute is the best way to create a unique launch template name from a given prefix (in your case, ${var.eks_cluster_name}).
Read more in the aws_launch_template resource documentation.
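
Applied to the resource above, the change is a one-line swap (a sketch; everything else stays the same):

resource "aws_launch_template" "this" {
  # name_prefix makes Terraform append a unique suffix, which avoids name
  # collisions if the template is ever replaced (e.g. with create_before_destroy)
  name_prefix   = "${var.eks_cluster_name}-node-launch-template"
  instance_type = var.instance_types[0]
  image_id      = var.node_ami
  # ... rest of the configuration unchanged ...
}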

Related

EC2 instances created by a launch template in an auto-scaling group are not being registered with the target group

These are my ALB configurations in case they're relevant; skip down to see the meat of the problem.
resource "aws_autoscaling_attachment" "asg_attachment" {
autoscaling_group_name = aws_autoscaling_group.web.id
lb_target_group_arn = aws_lb_target_group.main.arn
}
resource "aws_lb" "main" {
name = "test-${var.env}-alb"
internal = false
load_balancer_type = "application"
security_groups = [aws_security_group.elb.id]
subnets = [aws_subnet.main1.id, aws_subnet.main2.id]
tags = {
Name = "${var.project_name}-${var.env}-alb"
Project = var.project_name
Environment = var.env
ManagedBy = "terraform"
}
}
resource "aws_lb_target_group" "main" {
name = "${var.project_name}-${var.env}-alb-tg"
port = 80
protocol = "HTTP"
vpc_id = aws_vpc.main.id
deregistration_delay = 30
health_check {
interval = 10
matcher = "200-299"
path = "/"
}
}
resource "aws_lb_listener" "main" {
load_balancer_arn = aws_lb.main.arn
protocol = "HTTPS"
port = "443"
ssl_policy = "ELBSecurityPolicy-TLS-1-2-2017-01"
certificate_arn = aws_acm_certificate.main.arn
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.main.arn
}
}
============
I have an auto-scaling group created through terraform:
resource "aws_autoscaling_group" "web" {
vpc_zone_identifier = [aws_subnet.main1.id]
launch_template {
id = aws_launch_template.web.id
version = "$Latest"
}
min_size = 1
max_size = 10
lifecycle {
create_before_destroy = true
}
}
the launch template looks like this:
resource "aws_launch_template" "web" {
name_prefix = "${var.project_name}-${var.env}-autoscale-web-"
image_id = var.web_ami
instance_type = "t3.small"
key_name = var.key_name
vpc_security_group_ids = [aws_security_group.webserver.id]
}
If I use a launch configuration instead of a launch template, it works:
resource "aws_launch_configuration" "web" {
image_id = var.web_ami
instance_type = "t3.small"
key_name = var.key_name
security_groups = [aws_security_group.webserver.id]
root_block_device {
volume_size = 8 # GB
volume_type = "gp3"
}
}
When using the launch configuration, this one line is added to the auto-scaling group:
launch_configuration = aws_launch_configuration.web.name
and the launch_template section is removed.
aws_launch_configuration is deprecated, so I'd like to use the launch_template.
Everything is working fine; the instance spins up and I can connect to it and it passes the health check. The problem is that the EC2 instance doesn't automatically register with the target group. When I manually register it with the target group, then everything works fine.
How can I get the EC2 instances that spin up with a launch template to automatically get added to the target group?
It turns out aws_autoscaling_attachment is also deprecated, and I needed to add:
target_group_arns = [aws_lb_target_group.main.arn]
to my aws_autoscaling_group.
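For reference, the working auto-scaling group would then look something like this (a sketch assembled from the resources above):

resource "aws_autoscaling_group" "web" {
  vpc_zone_identifier = [aws_subnet.main1.id]

  # Registers instances launched by this group with the target group,
  # replacing the separate aws_autoscaling_attachment resource
  target_group_arns = [aws_lb_target_group.main.arn]

  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }

  min_size = 1
  max_size = 10

  lifecycle {
    create_before_destroy = true
  }
}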

aws_eks_node_group: max pods setting not applied

I am trying to create an EKS cluster with a max pods limit of 110.
I'm creating the node group using aws_eks_node_group:
resource "aws_eks_node_group" "eks-node-group" {
cluster_name = var.cluster-name
node_group_name = var.node-group-name
node_role_arn = var.eks-nodes-role.arn
subnet_ids = var.subnet-ids
version = var.cluster-version
release_version = nonsensitive(data.aws_ssm_parameter.eks_ami_release_version.value)
capacity_type = "SPOT"
lifecycle {
create_before_destroy = true
}
scaling_config {
desired_size = var.scale-config.desired-size
max_size = var.scale-config.max-size
min_size = var.scale-config.min-size
}
instance_types = var.scale-config.instance-types
update_config {
max_unavailable = var.update-config.max-unavailable
}
depends_on = [var.depends-on]
launch_template {
id = aws_launch_template.node-group-launch-template.id
version = aws_launch_template.node-group-launch-template.latest_version
}
}
resource "aws_launch_template" "node-group-launch-template" {
name_prefix = "eks-node-group"
image_id = var.template-image-id
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = var.ebs_size
}
}
ebs_optimized = true
user_data = base64encode(data.template_file.test.rendered)
# user_data = filebase64("${path.module}/example.sh")
}
data "template_file" "test" {
template = <<EOF
/etc/eks/bootstrap.sh ${var.cluster-name} --use-max-pods false --kubelet-extra-args '--max-pods=110'
EOF
}
The launch template is created just to provide bootstrap arguments. I have also tried supplying the same arguments through the EKS user-data submodule:
module "eks__user_data" {
source = "terraform-aws-modules/eks/aws//modules/_user_data"
version = "18.30.3"
cluster_name = aws_eks_cluster.metashape-eks.name
bootstrap_extra_args = "--use-max-pods false --kubelet-extra-args '--max-pods=110'"
}
but I've been unable to achieve the desired effect so far.
I'm trying to follow https://docs.aws.amazon.com/eks/latest/userguide/cni-increase-ip-addresses.html. The CNI driver is enabled (v1.12), and all other configuration seems correct too.
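As a side note, the template_file data source comes from the deprecated hashicorp/template provider. Since Terraform interpolates ${var.cluster-name} inside a heredoc anyway, an inline heredoc renders the same bootstrap command without the extra provider (a sketch; this does not by itself fix the max-pods issue):

resource "aws_launch_template" "node-group-launch-template" {
  # ... other arguments as above ...

  # Renders the same bootstrap command as the template_file data source
  user_data = base64encode(<<-EOF
    /etc/eks/bootstrap.sh ${var.cluster-name} --use-max-pods false --kubelet-extra-args '--max-pods=110'
  EOF
  )
}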

Launch template mtc should not specify an instance profile. The noderole in your request will be used to construct an instance profile

My launch template specifies an IAM instance profile, and my node group has a node role ARN. Based on this error, I removed the iam_instance_profile block from my launch template resource, and it still gave me the same error:
Launch template mtc should not specify an instance profile. The noderole in your request will be used to construct an instance profile.
Here's my launch template resource block with the instance profile included:
resource "aws_launch_template" "node" {
image_id = var.image_id
instance_type = var.instance_type
key_name = var.key_name
instance_initiated_shutdown_behavior = "terminate"
name = var.name
user_data = base64encode("node_userdata.tpl")
# vpc_security_group_ids = var.security_group_ids
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = 20
}
}
iam_instance_profile {
name = aws_iam_instance_profile.node.name
}
monitoring {
enabled = true
}
}
resource "aws_iam_instance_profile" "node" {
name_prefix = var.name
role = aws_iam_role.node.id
}
resource "aws_iam_role" "node" {
assume_role_policy = data.aws_iam_policy_document.assume_role_ec2.json
name = var.name
}
data "aws_iam_policy_document" "assume_role_ec2" {
statement {
actions = ["sts:AssumeRole"]
effect = "Allow"
principals {
identifiers = ["ec2.amazonaws.com"]
type = "Service"
}
}
}
When I first tried to apply this I got that error, so I removed all of it and tried again without the instance profile, like so:
resource "aws_launch_template" "node" {
image_id = var.image_id
instance_type = var.instance_type
key_name = var.key_name
instance_initiated_shutdown_behavior = "terminate"
name = var.name
user_data = base64encode("node_userdata.tpl")
# vpc_security_group_ids = var.security_group_ids
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = 20
}
}
monitoring {
enabled = true
}
}
I got the same error both times. Here's my node group resource block as well:
resource "aws_eks_node_group" "nodes_eks" {
cluster_name = aws_eks_cluster.eks.name
node_group_name = "eks-node-group"
node_role_arn = aws_iam_role.eks_nodes.arn
subnet_ids = module.vpc.private_subnets
# remote_access {
# ec2_ssh_key = aws_key_pair.bastion_auth.id
# }
scaling_config {
desired_size = 3
max_size = 6
min_size = 3
}
ami_type = "CUSTOM"
capacity_type = "ON_DEMAND"
force_update_version = false
# instance_types = [var.instance_type]
labels = {
role = "nodes-pool-1"
}
launch_template {
id = aws_launch_template.node.id
version = aws_launch_template.node.default_version
}
# version = var.k8s_version
depends_on = [
aws_iam_role_policy_attachment.amazon_eks_worker_node_policy,
aws_iam_role_policy_attachment.amazon_eks_cni_policy,
aws_iam_role_policy_attachment.amazon_ec2_container_registry_read_only,
]
}
In this case, there are multiple points to take care of, starting with [1]:
An object representing a node group launch template specification. The launch template cannot include SubnetId, IamInstanceProfile, RequestSpotInstances, HibernationOptions, or TerminateInstances, or the node group deployment or update will fail.
As per the document [2], you cannot specify any of the following:
Instance profile - the node IAM role will be used to construct one
Subnets - the subnet_ids defined in the node group configuration will be used
Shutdown behavior - EKS controls the instance lifecycle
Note that the table marks these as prohibited, which means they can never be used. Additionally, in [2], you can find this as well:
Some of the settings in a launch template are similar to the settings used for managed node configuration. When deploying or updating a node group with a launch template, some settings must be specified in either the node group configuration or the launch template. Don't specify both places. If a setting exists where it shouldn't, then operations such as creating or updating a node group fail.
So you were pretty close when you removed the iam_instance_profile, but you still have to get rid of the instance_initiated_shutdown_behavior argument:
resource "aws_launch_template" "node" {
image_id = var.image_id
instance_type = var.instance_type
key_name = var.key_name
name = var.name
user_data = base64encode("node_userdata.tpl")
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = 20
}
}
monitoring {
enabled = true
}
}
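Separately, note that user_data = base64encode("node_userdata.tpl") encodes the literal string "node_userdata.tpl", not the file's contents. Assuming the template file sits in the module directory, something along these lines is probably what was intended:

# Render the file's contents (hypothetical path) instead of the literal filename
user_data = base64encode(templatefile("${path.module}/node_userdata.tpl", {}))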
I strongly suggest reading through the second document as it contains a lot of useful information about what to do when using a custom AMI.
[1] https://docs.aws.amazon.com/eks/latest/APIReference/API_LaunchTemplateSpecification.html
[2] https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html#launch-template-basics

You cannot specify an AMI Type other than CUSTOM, when specifying an image id in your launch template

I'm stuck in a loop here. I'm trying to create a launch template for my EKS nodes, and my launch template looked like this:
resource "aws_launch_template" "node" {
image_id = var.image_id
instance_type = var.instance_type
key_name = var.key_name
instance_initiated_shutdown_behavior = "terminate"
name = var.name
user_data = base64encode("node_userdata.tpl")
# vpc_security_group_ids = var.security_group_ids
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = 20
}
}
iam_instance_profile {
name = aws_iam_instance_profile.node.name
}
monitoring {
enabled = true
}
}
Here's my node resource block as well:
resource "aws_eks_node_group" "nodes_eks" {
cluster_name = aws_eks_cluster.eks.name
node_group_name = "eks-node-group"
node_role_arn = aws_iam_role.eks_nodes.arn
subnet_ids = module.vpc.private_subnets
# remote_access {
# ec2_ssh_key = aws_key_pair.bastion_auth.id
# }
scaling_config {
desired_size = 3
max_size = 6
min_size = 3
}
ami_type = "AL2_x86_64"
capacity_type = "ON_DEMAND"
force_update_version = false
instance_types = [var.instance_type]
labels = {
role = "nodes-pool-1"
}
launch_template {
id = aws_launch_template.node.id
version = "$Default"
}
# version = var.k8s_version
depends_on = [
aws_iam_role_policy_attachment.amazon_eks_worker_node_policy,
aws_iam_role_policy_attachment.amazon_eks_cni_policy,
aws_iam_role_policy_attachment.amazon_ec2_container_registry_read_only,
]
}
The image ID in my launch template is the Amazon Linux 2 image "ami-098e42ae54c764c35". When I tried to apply that, it gave me this error:
You cannot specify an AMI Type other than CUSTOM, when specifying an image id in your launch template
So I changed it from var.image_id (the Amazon Linux 2 image) to "CUSTOM", and now it returns this error:
InvalidAMIID.Malformed: The image ID 'CUSTOM' is not valid. The expected format is ami-xxxxxxxx or ami-xxxxxxxxxxxxxxxxx.
I don't know what the solution is: when I passed the AMI in via a variable, it said the value had to be "CUSTOM", so I made it that, and now it says it has to be in the typical AMI ID format.
You cannot have both ami_type = "AL2_x86_64" and a launch template that specifies an image_id. The error message is a bit misleading, but if you look at [1], you will see where CUSTOM has to be used:
If the node group was deployed using a launch template with a custom AMI, then this is CUSTOM.
So, you have to change the ami_type line to:
ami_type = "CUSTOM"
Also, the Terraform docs [2] have something to say about fetching the version of the launch template. The final form of your launch_template block should be:
launch_template {
  id      = aws_launch_template.node.id
  version = aws_launch_template.node.latest_version
}
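Putting both changes together, the relevant parts of the node group would look like this (a sketch based on the configuration above):

resource "aws_eks_node_group" "nodes_eks" {
  # ... other arguments unchanged ...

  # CUSTOM is required whenever the launch template supplies its own image_id
  ami_type = "CUSTOM"

  launch_template {
    id      = aws_launch_template.node.id
    version = aws_launch_template.node.latest_version
  }
}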
[1] https://docs.aws.amazon.com/eks/latest/APIReference/API_Nodegroup.html#AmazonEKS-Type-Nodegroup-amiType
[2] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_node_group#version

How can I tag instances launched from an auto-scaling group using Terraform?

I'm using Terraform to set up an ECS cluster. This is my launch configuration:
resource "aws_launch_configuration" "launch_config" {
name_prefix = "my_project_lc"
image_id = "ami-ff15039b"
instance_type = "t2.medium"
user_data = "${data.template_file.user_data.rendered}"
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "autoscaling_group" {
name = "my_project_asg"
max_size = 2
min_size = 1
launch_configuration = "${aws_launch_configuration.launch_config.name}"
vpc_zone_identifier = ["${aws_subnet.public.id}"]
}
It's working fine, but the EC2 instances have no name (no "Name" tag). How can I change my config to give the instances a meaningful name? A prefix or something...
Thanks
Yes, it is possible. See the documentation for the aws_autoscaling_group resource. Example code:
resource "aws_autoscaling_group" "bar" {
name = "my_project_asg"
max_size = 2
min_size = 1
launch_configuration = "${aws_launch_configuration.launch_config.name}"
vpc_zone_identifier = ["${aws_subnet.public.id}"]
tag {
key = "Name"
value = "something-here"
propagate_at_launch = true
}
tag {
key = "lorem"
value = "ipsum"
propagate_at_launch = false
}
}
Alternatively, you can use the terraform-aws-autoscaling module, which implements different types of tags.
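If you later migrate to a launch template (as in the earlier questions on this page), instance Name tags can instead be set on the template itself via tag_specifications. A sketch:

resource "aws_launch_template" "web" {
  # ... image_id, instance_type, etc. ...

  # Tags applied to each instance launched from this template
  tag_specifications {
    resource_type = "instance"

    tags = {
      Name = "something-here"
    }
  }
}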