unable to get nodegroup info using eksctl - amazon-web-services

Total noob here, and I have a runaway EKS cluster adding up $$ on AWS.
I'm having a tough time scaling down my cluster and am not sure what to do. I'm following the recommendations here: How to stop AWS EKS Worker Instances (reference below).
If I run:
"eksctl get cluster", I get the following:
NAME REGION EKSCTL CREATED
my-cluster us-west-2 True
unique-outfit-1636757727 us-west-2 True
I then try the next line "eksctl get nodegroup --cluster my-cluster" and get:
2021-11-15 15:31:14 [ℹ] eksctl version 0.73.0
2021-11-15 15:31:14 [ℹ] using region us-west-2
Error: No nodegroups found
I'm desperate to scale down the cluster, but I'm stuck on the above command.
It seems everything installed and is running as intended, but the management part is failing! What am I doing wrong? Thanks in advance!
Reference --
eksctl get cluster
eksctl get nodegroup --cluster CLUSTERNAME
eksctl scale nodegroup --cluster CLUSTERNAME --name NODEGROUPNAME --nodes NEWSIZE
To completely scale down the nodes to zero use this (max=0 threw errors):
eksctl scale nodegroup --cluster CLUSTERNAME --name NODEGROUPNAME --nodes 0 --nodes-max 1 --nodes-min 0

You don't have a managed node group, therefore eksctl does not return any node group results. The same applies to the aws eks CLI.
...scaling down my cluster...
You can log on to the console, go to EC2 -> Auto Scaling Groups, locate the group created from your cluster's launch template, and scale it by updating the "Group details". Depending on how your cluster was created, you can look for the tag kubernetes.io/cluster/<your cluster name> to find the correct group.
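If you prefer to stay on the command line, here is a minimal sketch, assuming the my-cluster name and us-west-2 region from the question and that your nodes sit behind an Auto Scaling group carrying that cluster tag (the <asg-name> placeholder is whatever the describe call returns):
# confirm there really are no managed node groups (should print an empty list)
aws eks list-nodegroups --cluster-name my-cluster --region us-west-2
# find the Auto Scaling group tagged for the cluster
aws autoscaling describe-auto-scaling-groups --region us-west-2 \
  --query "AutoScalingGroups[?Tags[?Key=='kubernetes.io/cluster/my-cluster']].AutoScalingGroupName" \
  --output text
# scale it to zero (substitute the name returned above)
aws autoscaling update-auto-scaling-group --region us-west-2 \
  --auto-scaling-group-name <asg-name> --min-size 0 --max-size 0 --desired-capacity 0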

Related

AWS comparison between nodegroup and managed nodegroup

I use eksctl to create EKS clusters on AWS.
After creating a YAML configuration file defining the EKS cluster (following the docs), I run the command eksctl create cluster -f k8s-dev/k8s-dev.yaml to execute the create cluster action, and the log shows these lines:
2021-12-15 16:23:55 [ℹ] will create a CloudFormation stack for cluster itself and 1 nodegroup stack(s)
2021-12-15 16:23:55 [ℹ] will create a CloudFormation stack for cluster itself and 0 managed nodegroup stack(s)
What is the difference between a nodegroup and a managed nodegroup?
I have read the official AWS docs about managed nodegroups, but it is still not clear to me exactly when to choose a nodegroup versus a managed nodegroup.
Which would you use when you need to create an EKS cluster?
eksctl only gives you the option to choose between nodeGroups and managedNodeGroups (docs: https://eksctl.io/usage/container-runtime/#managed-nodes) but does not describe the difference. I think the following document will give you the information you need.
It describes the feature differences between EKS managed node groups, self-managed nodes, and AWS Fargate:
https://docs.aws.amazon.com/eks/latest/userguide/eks-compute.html
Choose whichever matches your purpose; if I were you, I would choose a managed nodegroup.
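To make the distinction concrete, here is a minimal sketch of an eksctl config file that declares one node group of each kind; the cluster name, region, instance type, and node group names are placeholder assumptions, not values from your setup:
# minimal sketch: one self-managed and one managed node group (all names/values are placeholders)
cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: k8s-dev
  region: us-east-1
nodeGroups:             # self-managed nodes: you own the AMI, upgrades, and scaling details
  - name: dev-unmanaged
    instanceType: t3.medium
    desiredCapacity: 2
managedNodeGroups:      # EKS-managed nodes: AWS provisions and manages the node lifecycle for you
  - name: dev-managed
    instanceType: t3.medium
    desiredCapacity: 2
EOF
eksctl create cluster -f cluster.yaml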

Unable to create a fargate profile for the AWS EKS cluster

I have an AWS EKS cluster named xyz-cicd in the Ohio region (us-east-2), which I created using the eksctl command below:
eksctl create cluster --name xyz-cicd --region us-east-2 --fargate
It took some time to create the cluster with a default profile. Now I want to create a new profile for the same cluster, so I ran the following command, which is giving me an error:
vinod827@Vinods-MacBook-Pro cicd % eksctl create fargateprofile \
--cluster xyz-cicd \
--name cicd \
--namespace cicd
Error: fetching cluster status to determine operability: unable to describe cluster control plane: ResourceNotFoundException: No cluster found for name: xyz-cicd.
{
RespMetadata: {
StatusCode: 404,
RequestID: "c12bd05c-3eb6-40bf-a972-f1cba139ea9a"
},
Message_: "No cluster found for name: xyz-cicd."
}
vinod827@Vinods-MacBook-Pro cicd %
Please note there is no issue with the cluster name or region. The cluster does exist in this same region, but I'm not sure why the eksctl command is returning an error stating that no cluster was found with that name. I can schedule a pod on the default profile, so that can't be the case. Please advise, thanks.
Your second command is missing the region parameter and is therefore probably using a different (default) region. That is why it is not finding your cluster.
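In other words, something like this (reusing the names from the question) should find the cluster:
eksctl create fargateprofile \
  --cluster xyz-cicd \
  --region us-east-2 \
  --name cicd \
  --namespace cicd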

ecs-cli refers to old cluster after changing default profile; doesn't show EC2 instances

I've been using AWS's ECS CLI to spin clusters of EC2 instances up and down for various tasks. The problem I'm running into is that it seems to be referring to old information that I don't know how to change.
e.g., I just created a cluster, my-second-cluster, successfully, and can see it in the AWS console:
$ ecs-cli up --keypair "my-keypair" --capability-iam --size 4 --instance-type t2.micro --port 22 --cluster-config my-second-cluster --ecs-profile a-second-profile
INFO[0001] Using recommended Amazon Linux 2 AMI with ECS Agent 1.45.0 and Docker version 19.03.6-ce
INFO[0001] Created cluster cluster=my-second-cluster region=us-east-1
INFO[0002] Waiting for your cluster resources to be created...
INFO[0002] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
INFO[0063] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
INFO[0124] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
VPC created: vpc-123abc
Security Group created: sg-123abc
Subnet created: subnet-123abc
Subnet created: subnet-123def
Cluster creation succeeded.
...but ecs-cli ps returns an error referring to an old cluster:
$ ecs-cli ps
FATA[0000] Error executing 'ps': Cluster 'my-first-cluster' is not active. Ensure that it exists
Specifying the cluster explicitly (ecs-cli ps --cluster my-second-cluster --region us-east-1) returns nothing, even though I see the 4 EC2 instances when I log into the AWS console.
Supporting details:
Before creating this second cluster, I created a second profile and set it to the default. I also set the new cluster to be the default.
$ ecs-cli configure profile --access-key <MY_ACCESS_KEY> --secret-key <MY_SECRET_KEY> --profile-name a-second-profile
$ ecs-cli configure profile default --profile-name a-second-profile
$ ecs-cli configure --cluster my-second-cluster --region us-east-1
INFO[0000] Saved ECS CLI cluster configuration default.
It's unclear to me where these ECS profile and cluster configs are stored (I'd expect to see them as files in ~/.aws, but no), or how to manipulate them beyond the CLI commands, which don't give great feedback. Any ideas on what I'm missing?
The ECS CLI stores its credentials at ~/.ecs/credentials.
When you create the initial profile, its name is default and it is used by default. When you set a-second-profile as the default, the metadata is updated to use a-second-profile by default, but you still have a profile named default that points to the original creds.
My guess is that to see the first cluster you now need to specify a profile name, since you changed the default. If you didn't give your initial profile a name, it will be default.
ecs-cli ps --ecs-profile default
If you deleted your cluster configuration, you may need to add the cluster again and associate it with the right profile:
ecs-cli configure --cluster cluster_name --default-launch-type launch_type --region region_name --config-name configuration_name
I hope that makes sense. Hopefully looking at how your commands update ~/.ecs/credentials will be helpful.
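A quick way to check, assuming the default file locations, is to look at the two files the ECS CLI writes and then be explicit about both the profile and the cluster:
# cluster configurations and profiles live in these two files
cat ~/.ecs/config
cat ~/.ecs/credentials
# be explicit about which profile and cluster to use
ecs-cli ps --cluster my-second-cluster --region us-east-1 --ecs-profile a-second-profile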
Some resources:
ECS CLI Configurations

Getting an error while creating an EKS cluster with the same name

I have created an EKS cluster named "prod". I worked on this "prod" cluster and after that I deleted it. I deleted all its associated VPCs, interfaces, security groups, everything. But if I try to create an EKS cluster with the same name "prod", I get the error below. Can you please help me with this issue?
[centos@ip-172-31-23-128 ~]$ eksctl create cluster --name prod --region us-east-2
[ℹ] eksctl version 0.13.0
[ℹ] using region us-east-2
[ℹ] setting availability zones to [us-east-2b us-east-2c us-east-2a]
[ℹ] subnets for us-east-2b - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ] subnets for us-east-2c - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ] subnets for us-east-2a - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ] nodegroup "ng-1902b9c1" will use "ami-080fbb09ee2d4d3fa" [AmazonLinux2/1.14]
[ℹ] using Kubernetes version 1.14
[ℹ] creating EKS cluster "prod" in "us-east-2" region with un-managed nodes
[ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-2 --cluster=prod'
[ℹ] CloudWatch logging will not be enabled for cluster "prod" in "us-east-2"
[ℹ] you can enable it with 'eksctl utils update-cluster-logging --region=us-east-2 --cluster=prod'
[ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "prod" in "us-east-2"
[ℹ] 2 sequential tasks: { create cluster control plane "prod", create nodegroup "ng-1902b9c1" }
[ℹ] building cluster stack "eksctl-prod-cluster"
[ℹ] 1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console
[ℹ] to cleanup resources, run 'eksctl delete cluster --region=us-east-2 --name=prod'
[✖] creating CloudFormation stack "eksctl-prod-cluster": AlreadyExistsException: Stack [eksctl-prod-cluster] already exists status code: 400, request id: 49258141-e03a-42af-ba8a-3fef9176063e
Error: failed to create cluster "prod"
There are two things to consider here.
The delete command does not wait for all the resources to actually be gone. You should add the --wait flag in order to let it finish. It usually takes around 10-15 minutes.
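For example, reusing the cluster name and region from the question:
eksctl delete cluster --region=us-east-2 --name=prod --wait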
If that is still not enough, you should make sure that you delete the leftover CloudFormation stacks. It would look something like this (adjust the naming):
# delete cluster: first delete the CloudFormation stacks, then the cluster itself
aws cloudformation list-stacks --query "StackSummaries[].StackName"
aws cloudformation delete-stack --stack-name worker-node-stack
aws eks delete-cluster --name EKStestcluster
Please let me know if that helped.
I was struggling with this error while running EKS via Terraform. I'll share my solution; hopefully it will save others some valuable time.
I tried to follow the references below, but got the same result.
I also tried to set up different timeouts for delete and create, which still didn't help.
Finally I was able to resolve this when I changed the create_before_destroy value inside the lifecycle block to false:
lifecycle {
create_before_destroy = false
}
(*) Note: pods are still running on the cluster during the update.
References:
Non-default node_group name breaks node group version upgrade
Changing tags causes node groups to be replaced

How to scale down/up containers in aws ecs cluster by command line, should I use aws cli or ecs-cli?

I'm running an AWS ECS cluster with EC2 instances, and I want a command to scale the tasks up to 1 running instance and then, after some time when I no longer need it, scale back down to 0. This should destroy the underlying EC2 instance to avoid charges. I'm not using Fargate as it is not in the free tier.
What I'm currently using to scale up to one instance and start running the task:
ecs-cli scale --capability-iam --size 1 --cluster myEC2clusterName --region us-east-1
aws ecs run-task --cluster myEC2clusterName --region us-east-1 --task-definition myTaskDefinitionName:1 --count 1
What I'm currently using to scale down:
ecs-cli scale --capability-iam --size 0 --cluster myEC2clusterName --region us-east-1
Is there an equivalent command using only the aws cli, without the need for ecs-cli, to do the same?
Yes, you can call the UpdateService API or use the update-service command.
aws ecs update-service --cluster myEC2clusterName --region us-east-1 --service myServiceName --desired-count 0
Edit: I misunderstood the question.
You can call the SetDesiredCapacity API or use the set-desired-capacity command to adjust the size of your EC2 auto scaling group.
The full command to scale the cluster up/down is:
aws autoscaling set-desired-capacity --desired-capacity 2 \
--auto-scaling-group-name <your-group-name>
You can get the group name with this command:
aws autoscaling describe-auto-scaling-instances
where the name itself will be in the AutoScalingGroupName field of the elements in the AutoScalingInstances JSON array.
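Putting it together, a short sketch (the group name placeholder is whatever the describe call returns for your cluster's instances):
# print only the group name(s) for the running instances
aws autoscaling describe-auto-scaling-instances \
  --query "AutoScalingInstances[].AutoScalingGroupName" --output text
# scale the underlying EC2 capacity down to zero when you no longer need it
aws autoscaling set-desired-capacity --auto-scaling-group-name <your-group-name> --desired-capacity 0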