Launch template mtc should not specify an instance profile. The noderole in your request will be used to construct an instance profile

My launch template specifies an IAM instance profile, and my node group has a node role ARN. Based on this error, I removed the iam_instance_profile block from my launch template resource and it still gave me the same error:
"Launch template mtc should not specify an instance profile. The noderole in your request will be used to construct an instance profile."
Here are my resource blocks, with the instance profile included:
resource "aws_launch_template" "node" {
image_id = var.image_id
instance_type = var.instance_type
key_name = var.key_name
instance_initiated_shutdown_behavior = "terminate"
name = var.name
user_data = base64encode("node_userdata.tpl")
# vpc_security_group_ids = var.security_group_ids
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = 20
}
}
iam_instance_profile {
name = aws_iam_instance_profile.node.name
}
monitoring {
enabled = true
}
}
resource "aws_iam_instance_profile" "node" {
name_prefix = var.name
role = aws_iam_role.node.id
}
resource "aws_iam_role" "node" {
assume_role_policy = data.aws_iam_policy_document.assume_role_ec2.json
name = var.name
}
data "aws_iam_policy_document" "assume_role_ec2" {
statement {
actions = ["sts:AssumeRole"]
effect = "Allow"
principals {
identifiers = ["ec2.amazonaws.com"]
type = "Service"
}
}
}
When I first tried to apply this I got that error, so I removed all of it and tried again without the instance profile, like so:
resource "aws_launch_template" "node" {
image_id = var.image_id
instance_type = var.instance_type
key_name = var.key_name
instance_initiated_shutdown_behavior = "terminate"
name = var.name
user_data = base64encode("node_userdata.tpl")
# vpc_security_group_ids = var.security_group_ids
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = 20
}
}
monitoring {
enabled = true
}
}
Got the same error both times. Here's my node group resource block as well:
resource "aws_eks_node_group" "nodes_eks" {
cluster_name = aws_eks_cluster.eks.name
node_group_name = "eks-node-group"
node_role_arn = aws_iam_role.eks_nodes.arn
subnet_ids = module.vpc.private_subnets
# remote_access {
# ec2_ssh_key = aws_key_pair.bastion_auth.id
# }
scaling_config {
desired_size = 3
max_size = 6
min_size = 3
}
ami_type = "CUSTOM"
capacity_type = "ON_DEMAND"
force_update_version = false
# instance_types = [var.instance_type]
labels = {
role = "nodes-pool-1"
}
launch_template {
id = aws_launch_template.node.id
version = aws_launch_template.node.default_version
}
# version = var.k8s_version
depends_on = [
aws_iam_role_policy_attachment.amazon_eks_worker_node_policy,
aws_iam_role_policy_attachment.amazon_eks_cni_policy,
aws_iam_role_policy_attachment.amazon_ec2_container_registry_read_only,
]
}

In this case, there are multiple points to take care of, starting with the API reference [1]:
An object representing a node group launch template specification. The launch template cannot include SubnetId, IamInstanceProfile, RequestSpotInstances, HibernationOptions, or TerminateInstances, or the node group deployment or update will fail.
As per the user guide [2], you cannot specify any of the following in the launch template:
- Instance profile: the node IAM role will be used
- Subnets: the subnet_ids defined in the node group configuration will be used
- Shutdown behavior: EKS controls the instance lifecycle
Note that the table marks these as prohibited, which means they can never be set in the launch template. Additionally, in [2], you can find this as well:
Some of the settings in a launch template are similar to the settings used for managed node configuration. When deploying or updating a node group with a launch template, some settings must be specified in either the node group configuration or the launch template. Don't specify both places. If a setting exists where it shouldn't, then operations such as creating or updating a node group fail.
So you were pretty close when you removed the iam_instance_profile block, but you also have to get rid of the instance_initiated_shutdown_behavior argument:
resource "aws_launch_template" "node" {
image_id = var.image_id
instance_type = var.instance_type
key_name = var.key_name
name = var.name
user_data = base64encode("node_userdata.tpl")
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = 20
}
}
monitoring {
enabled = true
}
}
I strongly suggest reading through the second document as it contains a lot of useful information about what to do when using a custom AMI.
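One concrete example: since the node group uses ami_type = "CUSTOM", EKS does not inject its own bootstrap user data, so node_userdata.tpl must join the node to the cluster itself. A minimal sketch, assuming the AMI is built from an EKS-optimized Amazon Linux 2 image (which ships /etc/eks/bootstrap.sh), inlined here as a heredoc for illustration:
user_data = base64encode(<<-EOT
  #!/bin/bash
  # join the node to the cluster; bootstrap.sh looks up the endpoint and CA itself
  /etc/eks/bootstrap.sh ${aws_eks_cluster.eks.name}
EOT
)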
[1] https://docs.aws.amazon.com/eks/latest/APIReference/API_LaunchTemplateSpecification.html
[2] https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html#launch-template-basics

Related

You cannot specify an AMI Type other than CUSTOM, when specifying an image id in your launch template

I'm stuck in a loop here: I'm trying to create a launch template for my EKS nodes, and my launch template looked like this:
resource "aws_launch_template" "node" {
image_id = var.image_id
instance_type = var.instance_type
key_name = var.key_name
instance_initiated_shutdown_behavior = "terminate"
name = var.name
user_data = base64encode("node_userdata.tpl")
# vpc_security_group_ids = var.security_group_ids
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = 20
}
}
iam_instance_profile {
name = aws_iam_instance_profile.node.name
}
monitoring {
enabled = true
}
}
Here's my node group resource block as well:
resource "aws_eks_node_group" "nodes_eks" {
cluster_name = aws_eks_cluster.eks.name
node_group_name = "eks-node-group"
node_role_arn = aws_iam_role.eks_nodes.arn
subnet_ids = module.vpc.private_subnets
# remote_access {
# ec2_ssh_key = aws_key_pair.bastion_auth.id
# }
scaling_config {
desired_size = 3
max_size = 6
min_size = 3
}
ami_type = "AL2_x86_64"
capacity_type = "ON_DEMAND"
force_update_version = false
instance_types = [var.instance_type]
labels = {
role = "nodes-pool-1"
}
launch_template {
id = aws_launch_template.node.id
version = "$Default"
}
# version = var.k8s_version
depends_on = [
aws_iam_role_policy_attachment.amazon_eks_worker_node_policy,
aws_iam_role_policy_attachment.amazon_eks_cni_policy,
aws_iam_role_policy_attachment.amazon_ec2_container_registry_read_only,
]
}
The image ID in my launch template is the Amazon Linux 2 image "ami-098e42ae54c764c35". When I tried to run that, it gave me this error:
You cannot specify an AMI Type other than CUSTOM, when specifying an image id in your launch template
So I changed the image_id from var.image_id (the Amazon Linux 2 image) to "CUSTOM", and now it returns this error:
InvalidAMIID.Malformed: The image ID 'CUSTOM' is not valid. The expected format is ami-xxxxxxxx or ami-xxxxxxxxxxxxxxxxx.
I don't know what the solution is, because when I passed in the AMI via a variable it said the value had to be "CUSTOM", so I made it that, and now it's saying it has to be in the typical AMI ID format.
You cannot have both ami_type = "AL2_x86_64" and an image_id in your launch template. The message is a bit misleading, but if you look in [1], you will see where CUSTOM has to be used:
If the node group was deployed using a launch template with a custom AMI, then this is CUSTOM.
So you have to change the ami_type line in the node group (not the image_id in the launch template):
ami_type = "CUSTOM"
Also, the Terraform docs [2] have something to say about fetching the version of the launch template. The final form of your launch_template block should be:
launch_template {
  id      = aws_launch_template.node.id
  version = aws_launch_template.node.latest_version
}
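As a side note, "$Default" (or the default_version attribute) points at the template's default version, while latest_version always tracks the newest one, so the node group picks up template edits on the next update. If you would rather keep following the default version, the equivalent reference is:
launch_template {
  id      = aws_launch_template.node.id
  version = aws_launch_template.node.default_version
}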
[1] https://docs.aws.amazon.com/eks/latest/APIReference/API_Nodegroup.html#AmazonEKS-Type-Nodegroup-amiType
[2] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_node_group#version

Terraform launch template

I'm creating an AWS EKS cluster via Terraform, but when I create the cluster node group with a launch template, I get two launch templates: one with the name and settings that I specified, and a second one with a random name but the same settings. The only difference between the two launch templates is the IAM instance profile on the second (automatically created) one.
If I try to specify an IAM instance profile in my launch template, I get an error saying I cannot use it there.
Am I doing something wrong, or is it normal that two launch templates are created?
# eks node launch template
resource "aws_launch_template" "this" {
  name          = "${var.eks_cluster_name}-node-launch-template"
  instance_type = var.instance_types[0]
  image_id      = var.node_ami

  block_device_mappings {
    device_name = "/dev/xvda"
    ebs {
      volume_size = 80
      volume_type = "gp3"
      throughput  = 125
      encrypted   = false
      iops        = 3000
    }
  }

  lifecycle {
    create_before_destroy = true
  }

  network_interfaces {
    security_groups = [data.aws_eks_cluster.this.vpc_config[0].cluster_security_group_id]
  }

  user_data = base64encode(templatefile("${path.module}/userdata.tpl", merge(local.userdata_vars, local.cluster_data)))

  tags = {
    "eks:cluster-name"   = var.eks_cluster_name
    "eks:nodegroup-name" = var.node_group_name
  }

  tag_specifications {
    resource_type = "instance"
    tags = {
      Name                 = "${var.eks_cluster_name}-node"
      "eks:cluster-name"   = var.eks_cluster_name
      "eks:nodegroup-name" = var.node_group_name
    }
  }

  tag_specifications {
    resource_type = "volume"
    tags = {
      "eks:cluster-name"   = var.eks_cluster_name
      "eks:nodegroup-name" = var.node_group_name
    }
  }
}
# eks nodes
resource "aws_eks_node_group" "this" {
  cluster_name    = aws_eks_cluster.this.name
  node_group_name = var.node_group_name
  node_role_arn   = aws_iam_role.eksNodesGroup.arn
  subnet_ids      = data.aws_subnet_ids.private.ids

  scaling_config {
    desired_size = 1
    max_size     = 10
    min_size     = 1
  }

  update_config {
    max_unavailable = 1
  }

  launch_template {
    version = aws_launch_template.this.latest_version
    id      = aws_launch_template.this.id
  }

  lifecycle {
    create_before_destroy = true
    ignore_changes = [
      scaling_config[0].desired_size
    ]
  }

  # Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
  # Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
  depends_on = [
    aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
    aws_launch_template.this
  ]
}
I was expecting Terraform to create only one launch template.
Change the name attribute to name_prefix.
Use:
name_prefix = "${var.eks_cluster_name}-node-launch-template"
Instead of:
name = "${var.eks_cluster_name}-node-launch-template"
The name_prefix attribute is the best choice for creating a unique launch template name from a given prefix (in your case, ${var.eks_cluster_name}).
Read more in the aws_launch_template resource documentation.
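For context, name_prefix works well with the create_before_destroy lifecycle already present in the question's template: Terraform generates a fresh unique name for the replacement template before destroying the old one, instead of colliding on a fixed name. A minimal sketch of the relevant lines:
resource "aws_launch_template" "this" {
  # Terraform appends a unique suffix to this prefix on every create
  name_prefix = "${var.eks_cluster_name}-node-launch-template"

  lifecycle {
    create_before_destroy = true
  }

  # ... rest of the template unchanged
}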

Terraform multiple instances but by separate execution

I am trying to create AWS instances with a load balancer, a security group, and three instances: GROUP 1.
I can do this by declaring the appropriate resources.
Now I want to create multiple such sets of instances that are independent of the previous ones: GROUP 2.
I want this for the security of the groups: one group's information should not overlap with another's.
I tried to look this up a lot, but couldn't find an approach.
Below is an example of an instance:
resource "aws_instance" "node" {
ami = data.aws_ami.ubuntu.id
subnet_id = aws_subnet.development-private-1a.id
key_name = aws_key_pair.nodes.key_name
instance_type = var.instance_type
vpc_security_group_ids = [aws_security_group.dev-ec2-sg.id]
tags = {
Name = "${var.app_name}"
#Environment = "production"
}
root_block_device {
volume_type = "gp2"
volume_size = 8
delete_on_termination = true
}
user_data = file("install_apache.sh")
}
resource "aws_lb_target_group_attachment" "node" {
target_group_arn = aws_lb_target_group.dev.arn
target_id = aws_instance.node.id
port = 80
}
I want to add multiples of these instances with different security groups, load balancers, and all the other resources, but I don't want to add additional copies of the same code to the Terraform file. I want those instances to be independent of this one, but the problem I am facing is that Terraform manipulates only this one instance.
Based on the comments, you could organize your instance code and its dependents (e.g. the target group attachment) into a Terraform (TF) module. Also, since you wish to create multiple instances of the same type, you could consider using an aws_autoscaling_group, which would let you not only create multiple instances easily but also manage them easily (see the sketch at the end of this answer).
Subsequently, you could define a module as follows. Below is only a partial example; it does not use aws_autoscaling_group, but instead creates multiple instances using count:
./module/ec2/main.tf
variable "subnet_id" {}
variable "app_name" {}
variable "key_pair" {}
variable "security_group_id" {}
variable "target_group_arn" {}
variable "instance_type" {}

variable "instance_count" {
  default = 1
}

data "aws_ami" "ubuntu" {
  # ...
}

resource "aws_instance" "node" {
  count         = var.instance_count
  ami           = data.aws_ami.ubuntu.id
  subnet_id     = var.subnet_id
  key_name      = var.key_pair
  instance_type = var.instance_type
  vpc_security_group_ids = [var.security_group_id]

  tags = {
    Name = "${var.app_name}"
    #Environment = "production"
  }

  root_block_device {
    volume_type           = "gp2"
    volume_size           = 8
    delete_on_termination = true
  }

  # path.module keeps the file lookup relative to the module directory
  user_data = file("${path.module}/install_apache.sh")
}

resource "aws_lb_target_group_attachment" "node" {
  count            = var.instance_count
  target_group_arn = var.target_group_arn
  target_id        = aws_instance.node[count.index].id
  port             = 80
}

# some outputs skipped
Having such a module, in your parent file/module you would create the GROUP 1 and GROUP 2 instances as follows (again, just a partial example):
./main.tf
# resources such as LB, SGs, subnets, etc.

module "group1" {
  source            = "./module/ec2/"
  instance_count    = 3
  security_group_id = <security-group-id1>
  target_group_arn  = aws_lb_target_group.dev.arn
  # other parameters
}

module "group2" {
  source            = "./module/ec2/"
  instance_count    = 3
  security_group_id = <security-group-id2>
  target_group_arn  = aws_lb_target_group.dev.arn
  # other parameters
}
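For completeness, here is a minimal sketch of the aws_autoscaling_group alternative mentioned above. The launch template resource and its arguments are illustrative assumptions built on the module's variables, not part of the original code:
# hypothetical launch template reusing the module's variables
resource "aws_launch_template" "node" {
  name_prefix   = "${var.app_name}-"
  image_id      = data.aws_ami.ubuntu.id
  instance_type = var.instance_type
  key_name      = var.key_pair
  vpc_security_group_ids = [var.security_group_id]
  user_data     = filebase64("${path.module}/install_apache.sh")
}

resource "aws_autoscaling_group" "node" {
  desired_capacity    = var.instance_count
  min_size            = var.instance_count
  max_size            = var.instance_count
  vpc_zone_identifier = [var.subnet_id]
  # the ASG registers its instances with the target group itself,
  # replacing the per-instance aws_lb_target_group_attachment resources
  target_group_arns   = [var.target_group_arn]

  launch_template {
    id      = aws_launch_template.node.id
    version = aws_launch_template.node.latest_version
  }
}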

How can I tag instances launched from autoscaling using Terraform?

I'm using Terraform to set up an ECS cluster. This is my launch configuration:
resource "aws_launch_configuration" "launch_config" {
name_prefix = "my_project_lc"
image_id = "ami-ff15039b"
instance_type = "t2.medium"
user_data = "${data.template_file.user_data.rendered}"
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "autoscaling_group" {
name = "my_project_asg"
max_size = 2
min_size = 1
launch_configuration = "${aws_launch_configuration.launch_config.name}"
vpc_zone_identifier = ["${aws_subnet.public.id}"]
}
It's working fine, but the EC2 instances have no name (no "Name" tag). How can I change my config to give the instances a meaningful name? A prefix or something...
Thanks
Yes, it is possible. See the documentation for the aws_autoscaling_group resource. Example code:
resource "aws_autoscaling_group" "bar" {
name = "my_project_asg"
max_size = 2
min_size = 1
launch_configuration = "${aws_launch_configuration.launch_config.name}"
vpc_zone_identifier = ["${aws_subnet.public.id}"]
tag {
key = "Name"
value = "something-here"
propagate_at_launch = true
}
tag {
key = "lorem"
value = "ipsum"
propagate_at_launch = false
}
}
Alternatively, you can use terraform-aws-autoscaling module which implements different types of tags.
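On Terraform 0.12 and later you can also propagate a whole map of tags with a dynamic block. Here var.tags is an assumed map(string) variable, not something from the original configuration, and every tag is propagated at launch:
variable "tags" {
  type = map(string)
  default = {
    Name  = "something-here"
    lorem = "ipsum"
  }
}

resource "aws_autoscaling_group" "bar" {
  # ... name, sizes, launch_configuration, vpc_zone_identifier as above

  # generate one tag block per entry in the map
  dynamic "tag" {
    for_each = var.tags
    content {
      key                 = tag.key
      value               = tag.value
      propagate_at_launch = true
    }
  }
}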

How to correctly use Count and pick multiple az subnets in Terraform

I am trying to implement a module that spins up a number of instances in already-created subnets (created by Terraform), but I am not sure how to actually use count in modules, and also how to pick values from an S3 remote-state data source to spin up instances across multiple AZs. Here is what my resource in the module dir looks like:
resource "aws_instance" "ec2-instances" {
count = "${var.count_num }"
ami = "${data.aws_ami.ubuntu.id}"
instance_type = "${var.machine_type}"
key_name = "${var.key_name}"
#vpc_security_group_ids = ["${aws_security_group.jumpbox-sec-group.id}"]
vpc_security_group_ids = ["${var.sec-group}"]
disable_api_termination = "${var.is_production ? true : false}"
subnet_id = "${element(var.es_stg_subnets, count.index)}" <--- This won't work , i need to use data-source as s3
tags {
#Name = "${var.master_name}-${count.index+1}"
Name = "${var.instance-tag}-${count.index+1}"
Type = "${var.instance-type-tag}"
}
root_block_device {
volume_size = "${var.instance-vol-size}"
volume_type = "gp2"
}
}
And here is the actual module:
module "grafana-stg" {
source = "../../modules/services/gen-ec2"
#ami_id = "${data.aws_ami.ubuntu.id}"
instance_type = "${var.grafana_machine_type}"
key_name = "jumpbox"
vpc_security_group_ids = ["${aws_security_group.grafana-sec-group.id}"]
#subnets = "${data.terraform_remote_state.s3_bucket_state.subnet-public-prod-1a}"
subnet_id = ??????????????????
disable_api_termination = "${var.is_production ? true : false}"
}
I would look at retrieving your subnets by utilising a data source.
Utilising Data Sources
Terraform has the concept of data sources. You can pull information from AWS that you require for resources. In your gen-ec2.tf file:
// In order to get subnets, you need the VPC they belong to.
// Note you can filter on a variety of different tags.
data "aws_vpc" "selected" {
  tags {
    Name = "NameOfVPC"
  }
}

// This will then retrieve all subnet ids based on the filter
data "aws_subnet_ids" "private" {
  vpc_id = "${data.aws_vpc.selected.id}"

  tags {
    Tier = "private*"
  }
}

resource "aws_instance" "ec2-instances" {
  count         = "${length(data.aws_subnet_ids.private.ids)}"
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "${var.machine_type}"
  key_name      = "${var.key_name}"
  vpc_security_group_ids = ["${var.sec-group}"]
  disable_api_termination = "${var.is_production ? true : false}"
  # element() cycles through the retrieved subnet ids
  subnet_id = "${element(data.aws_subnet_ids.private.ids, count.index)}"

  tags {
    Name = "${var.instance-tag}-${count.index+1}"
    Type = "${var.instance-type-tag}"
  }

  root_block_device {
    volume_size = "${var.instance-vol-size}"
    volume_type = "gp2"
  }
}
Your module call now looks like this:
module "grafana-stg" {
source = "../../modules/services/gen-ec2"
#ami_id = "${data.aws_ami.ubuntu.id}"
instance_type = "${var.grafana_machine_type}"
key_name = "jumpbox"
vpc_security_group_ids = ["${aws_security_group.grafana-sec-group.id}"]
disable_api_termination = "${var.is_production ? true : false}"
}
For me, using Terraform v0.12.5, the snippet below worked fine:
data "aws_subnet_ids" "public_subnet_list" {
vpc_id = "${var.vpc_id}"
tags = {
Tier = "Public"
}
}
resource "aws_instance" "example" {
count = 2
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
subnet_id = tolist(data.aws_subnet_ids.public_subnet_list.ids)[count.index]
}