I've created an AWS EKS cluster with managed node group(s), and I've also created an AWS ASG (Auto Scaling Group) and an AWS Launch Template. But when I attach the Launch Template to the managed EKS node groups, EKS creates a duplicate of the existing Launch Template
--
Those Launch Templates:
DEV/MANAGED/EKS-WORKERS-SM/LATEST/TEMPLATE/EU-CENTRAL-1X
and
eks-XXXXXXXX-XXXXXXXX-XXXXXXXX
are identical, and I don't understand why EKS is creating duplicates
--
The same thing is happening with the AWS ASG (Auto Scaling Group). Is there any way to fix this problem?
Technologies used:
Terraform
Launch Template resource
Auto Scaling Group resource
EKS resource
EKS node group resource
It seems you're running both resources. If you want to manage the nodes entirely yourself, you don't need the EKS node group resource; instead, use the Launch Template resource together with the Auto Scaling Group resource and proper tagging, as sketched below.
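A minimal sketch of that self-managed approach, assuming a hypothetical cluster named "dev-eks" and illustrative variables (var.eks_worker_ami_id, var.private_subnet_ids); the kubernetes.io/cluster/<cluster-name> tag is what ties the worker nodes to the cluster:
# Self-managed workers: Launch Template + Auto Scaling Group, no aws_eks_node_group.
# Resource names and values are illustrative only.
resource "aws_launch_template" "workers" {
  name_prefix   = "dev-eks-workers-"
  image_id      = var.eks_worker_ami_id   # an EKS-optimized AMI; user_data must run /etc/eks/bootstrap.sh so nodes join the cluster
  instance_type = "t3.medium"
}

resource "aws_autoscaling_group" "workers" {
  name                = "dev-eks-workers"
  min_size            = 1
  max_size            = 3
  vpc_zone_identifier = var.private_subnet_ids

  launch_template {
    id      = aws_launch_template.workers.id
    version = aws_launch_template.workers.latest_version
  }

  # This tag lets the cluster (and cluster-autoscaler) recognise the nodes.
  tag {
    key                 = "kubernetes.io/cluster/dev-eks"
    value               = "owned"
    propagate_at_launch = true
  }
}
With this layout there is no aws_eks_node_group at all, so EKS never generates its own copy of the launch template or a second Auto Scaling group.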
Related
Our EKS cluster is Terraform-managed and uses an EC2 Launch Template defined as a Terraform resource. Our aws_eks_node_group includes a launch_template block, as shown below.
resource "aws_eks_node_group" eks_node_group" {
.........................
..........................
launch_template {
id = aws_launch_template.eksnode_template.id
version = aws_launch_template.eksnode_template.default_version
name = aws_launch_template.eksnode_template.name
}
}
However, after a while, EKS deployed a new Launch Template on its own and linked it to the relevant Auto Scaling group.
Why did this happen in the first place, and how can we avoid it in the future?
How can we link the customer-managed Launch Template to the EKS Auto Scaling group via Terraform again? I tried changing the name or version of the launch template, but it is still using the one created by EKS.
The Amazon EKS API creates this launch template either by copying one you provide or by creating one automatically with default values in your account.
We don't recommend that you modify auto-generated launch templates.
(source)
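In practice, then, the eks-XXXXXXXX template is an expected copy and shouldn't be edited; keep pointing the node group at your own template and roll out node changes by publishing a new version of it, which EKS then copies and applies. A minimal sketch of that reference, reusing the resource names from the question (only one of id or name may be set in this block):
resource "aws_eks_node_group" "eks_node_group" {
  # ... cluster_name, node_role_arn, subnet_ids, scaling_config, etc.

  launch_template {
    id      = aws_launch_template.eksnode_template.id
    # Track your own template's latest version; bumping it triggers a
    # managed rolling update of the node group, and EKS refreshes its copy.
    version = aws_launch_template.eksnode_template.latest_version
  }
}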
I have set up CodePipeline for end-to-end automatic deployment of revisions to EC2 instances using a CloudFormation template; the deployment group is of type blue/green for CodeDeploy.
But I don't understand how to keep the CodeDeploy deployment group in sync with the newly created auto scaling group (green).
Do I have to create a new Lambda invoke action in the pipeline after a successful deployment to update the newly created auto scaling group name?
Unfortunately, CloudFormation does not support blue/green deployments for the EC2 platform:
For blue/green deployments, AWS CloudFormation supports deployments on Lambda compute platforms only.
Support for ECS is very new.
To create a blue/green deployment group for the EC2 platform, you would have to create a custom resource in CloudFormation.
The custom resource would be backed by a Lambda function, and in that Lambda function you would use create_deployment_group to define the blue/green details for your EC2 instances. As part of this process, you can choose how to handle the Auto Scaling group, e.g.
"greenFleetProvisioningOption": {
"action": "COPY_AUTO_SCALING_GROUP"
}
For creating the custom resource, crhelper by AWS is very useful.
Hope this helps and hope Blue/Green for EC2 will be supported by CloudFormation soon.
How can I edit user data in the managed node group which is part of my EKS cluster?
I tried to create a new version of the Launch Template that the EKS cluster created, but I get an error under the cluster's "Health issues": Ec2LaunchTemplateVersionMismatch
I want the nodes in the managed node group to mount the EFS automatically, without doing it manually on each instance, because of the autoscaling.
I am working on an EMR template with autoscaling.
While a static EMR setup with an instance group works fine, I cannot attach an
AWS::ApplicationAutoScaling::ScalableTarget
For troubleshooting, I've split my template into two separate ones. In the first I create a normal EMR cluster (which works fine). In the second I have a ScalableTarget definition, which fails to attach with this error:
11:29:34 UTC+0100 CREATE_FAILED AWS::ApplicationAutoScaling::ScalableTarget AutoscalingTarget EMR instance group doesn't exist: Failed to find Cluster XXXXXXX
The funny thing is that this cluster DOES exist.
I also had a look at IAM roles but everything seems to be ok there...
Can anyone advise on this?
Has anyone gotten autoscaling for an instance group to work via CloudFormation?
I have already tried this and raised a request with AWS. This autoscaling feature is not yet available through CloudFormation. For now I use CloudFormation for the custom EMR security group, S3, etc., and in the Outputs tab I add the command line command (aws emr create-cluster......). After getting the output, I run that command to launch the cluster.
Actually, autoscaling can be enabled at cluster launch time by using --auto-scaling-role. If we use CloudFormation for EMR, the autoscaling feature is not available because it launches the cluster without "--auto-scaling-role".
I hope this can be useful...
I want to create an ASG with only 1 instance initially.
I want all the instances of this ASG to be behind an ELB.
So I set load_balancers = ["${aws_elb.Production-Web-ELB.name}"] in the resource "aws_autoscaling_group" "ProductionWeb-ScalingGroup".
Now, when I write the code for the resource "aws_elb" "Production-Web-ELB", and I set instances = ["${aws_autoscaling_group.ProductionWeb-ScalingGroup.*.id}"], I get the error...
Error configuring: 1 error(s) occurred:
* Cycle: aws_autoscaling_group.ProductionWeb-ScalingGroup, aws_elb.Production-Web-ELB
I understand that this error means the two resources reference each other in a cycle. To check it, I commented out the load_balancers = ["${aws_elb.Production-Web-ELB.name}"] part, and terraform plan runs without any error.
So my question is: am I unable, using Terraform, to create an ASG with an attached ELB so that every EC2 instance that spawns inside it is automatically put behind the ELB?
Is there something from the documentation that I missed?
Is there a workaround?
You don't need to explicitly define the instances that will be associated with the ELB in Terraform's ELB definition. By using the load_balancers argument, you're associating the ELB with the Auto Scaling group, and Auto Scaling will attach any instance it launches to that ELB.
Terraform isn't directly managing the state of the instances in this case -- AWS Auto Scaling is, so their state likewise doesn't need to be defined in Terraform beyond defining a launch configuration and associating it with the Auto Scaling group.
To tell Terraform to launch the Auto Scaling group with a single instance, set your min_size argument to 1 and let your scaling policies handle the desired capacity from there. You could alternatively set desired_capacity to 1, but be wary of managing that state in Terraform, because it will set desired_capacity back to 1 every time you apply your plan. A minimal sketch follows.
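This sketch uses hypothetical names and values (AMI ID, availability zones) and the older interpolation syntax from the question; note that the ELB carries no instances argument, and the only link between the two is load_balancers on the ASG:
resource "aws_launch_configuration" "web" {
  image_id      = "ami-12345678"   # illustrative AMI
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_elb" "Production-Web-ELB" {
  name               = "production-web-elb"
  availability_zones = ["eu-central-1a", "eu-central-1b"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  # No "instances" argument here: Auto Scaling registers instances itself.
}

resource "aws_autoscaling_group" "ProductionWeb-ScalingGroup" {
  name                 = "production-web-asg"
  availability_zones   = ["eu-central-1a", "eu-central-1b"]
  min_size             = 1
  max_size             = 4
  launch_configuration = "${aws_launch_configuration.web.name}"
  load_balancers       = ["${aws_elb.Production-Web-ELB.name}"]
}
Because nothing references the ASG from the ELB, the cycle disappears, and any instance the group launches is registered with the ELB automatically.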