ecs-cli up does not create ec2 instance - amazon-web-services

I'm trying to launch an ECS cluster using the CLI, but I'm stuck on the EC2 instances not being created.
I've configured my ECS credentials and added all the missing permissions extracted from the CloudFormation errors - at least I don't see any additional errors now. I've also set up a simple cluster configuration.
~/.ecs/config
clusters:
  mycluster:
    cluster: mycluster
    region: eu-north-1
    default_launch_type: EC2
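For reference, this file can be written with ecs-cli configure rather than by hand; a rough equivalent of the configuration above (assuming the same names and region):
ecs-cli configure --cluster mycluster --region eu-north-1 --default-launch-type EC2 --config-name mycluster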
And this is the CLI command I run:
ecs-cli up --keypair myKeyPair --capability-iam \
--size 1 --instance-type t2.micro \
--cluster-config mycluster --cluster mycluster \
--launch-type EC2 --force --verbose
I get no error messages and the cluster is created, but I see no instances connected to it, and no instances show up in EC2.
This is the output from the CLI command:
INFO[0000] Using recommended Amazon Linux 2 AMI with ECS Agent 1.29.1 and Docker version 18.06.1-ce
INFO[0000] Created cluster cluster=mycluster region=eu-north-1
INFO[0000] Waiting for your CloudFormation stack resources to be deleted...
INFO[0000] Cloudformation stack status stackStatus=DELETE_IN_PROGRESS
DEBU[0030] Cloudformation stack status stackStatus=DELETE_IN_PROGRESS
DEBU[0061] Cloudformation create stack call succeeded stackId=0xc00043ab11
INFO[0061] Waiting for your cluster resources to be created...
DEBU[0061] parsing event eventStatus=CREATE_IN_PROGRESS resource="arn:aws:cloudformation:eu-north-1:999987631111:stack/amazon-ecs-cli-setup-mycluster/11111111-aba2-11e9-ac3c-0e40cf291592"
INFO[0061] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
DEBU[0091] parsing event eventStatus=CREATE_IN_PROGRESS resource=subnet-0cc4a3aa110555d42
DEBU[0091] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
DEBU[0121] parsing event eventStatus=CREATE_IN_PROGRESS resource=rtbassoc-05c185a5aa11ca22e
INFO[0121] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
DEBU[0151] parsing event eventStatus=CREATE_COMPLETE resource=rtbassoc-05c185a5aa11ca22e
DEBU[0151] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
DEBU[0181] parsing event eventStatus=CREATE_COMPLETE resource=rtbassoc-05c185a5aa11ca22e
INFO[0181] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
DEBU[0212] parsing event eventStatus=CREATE_COMPLETE resource=amazon-ecs-cli-setup-mycluster-EcsInstanceProfile-1KS4Q3W9HAAAA
DEBU[0212] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
DEBU[0242] parsing event eventStatus=CREATE_COMPLETE resource="arn:aws:cloudformation:eu-north-1:999987631111:stack/amazon-ecs-cli-setup-mycluster/11111111-aba2-11e9-ac3c-0e40cf291592"
VPC created: vpc-033f7a6fedfee256d
Security Group created: sg-0e4461f781bad6681
Subnet created: subnet-0cc4a3aa110555d42
Subnet created: subnet-0a4797072dc9641d2
Cluster creation succeeded.
Running describe-clusters a couple of hours later:
aws ecs describe-clusters --clusters mycluster --region eu-north-1
gives the following output:
{
    "clusters": [
        {
            "status": "ACTIVE",
            "statistics": [],
            "tags": [],
            "clusterName": "mycluster",
            "registeredContainerInstancesCount": 0,
            "pendingTasksCount": 0,
            "runningTasksCount": 0,
            "activeServicesCount": 0,
            "clusterArn": "arn:aws:ecs:eu-north-1:999987631111:cluster/mycluster"
        }
    ],
    "failures": []
}
Does anyone know what I might be missing? I haven't hit any limits, since I've only got one other running instance (in a different region).
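One way to dig further is to list the resources the CloudFormation stack actually created (the stack is expected to include an Auto Scaling group for the instances); a sketch using the stack name from the log above:
aws cloudformation describe-stack-resources \
    --stack-name amazon-ecs-cli-setup-mycluster --region eu-north-1 \
    --query "StackResources[].[ResourceType,ResourceStatus]" --output table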

Related

Why did eksctl create iamserviceaccount fail? waiter state transitioned to Failure

I am running the command:
eksctl create iamserviceaccount --name efs-csi-controller-sa --namespace kube-system --cluster mmpana --attach-policy-arn arn:aws:iam::12345678:policy/EKS_EFS_CSI_Driver_Policy --approve --override-existing-serviceaccounts --region us-east-1
and I got this error:
2023-02-07 13:36:36 [ℹ] 1 error(s) occurred and IAM Role stacks haven't been created properly, you may wish to check CloudFormation console
2023-02-07 13:36:36 [✖] waiter state transitioned to Failure
Then I checked the CloudFormation stacks.
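To list the failed stacks from the CLI as well, something like this should work (a sketch; the status values are standard CloudFormation states):
aws cloudformation list-stacks --region us-east-1 \
    --stack-status-filter CREATE_FAILED ROLLBACK_COMPLETE \
    --query "StackSummaries[].StackName"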
I upgraded eksctl yesterday:
eksctl version
0.128.0
I am now looking at my policy.
How do I fix this?

Unable to create a fargate profile for the AWS EKS cluster

I have an AWS EKS cluster named xyz-cicd in the Ohio region (us-east-2), which I created using the eksctl command below:
eksctl create cluster --name xyz-cicd --region us-east-2 --fargate
It took some time to create the cluster with a default profile. However, I want to create a new profile for the same cluster, so I ran the following command, which is giving me an error:
vinod827@Vinods-MacBook-Pro cicd % eksctl create fargateprofile \
--cluster xyz-cicd \
--name cicd \
--namespace cicd
Error: fetching cluster status to determine operability: unable to describe cluster control plane: ResourceNotFoundException: No cluster found for name: xyz-cicd.
{
  RespMetadata: {
    StatusCode: 404,
    RequestID: "c12bd05c-3eb6-40bf-a972-f1cba139ea9a"
  },
  Message_: "No cluster found for name: xyz-cicd."
}
vinod827@Vinods-MacBook-Pro cicd %
Please note there is no issue with the cluster name or region. The cluster does exist in that same region, and I'm not sure why the eksctl command is returning an error stating that no cluster was found with that name. I can schedule a pod on the default profile, which wouldn't be possible if the cluster were missing. Please advise, thanks.
Your second command is missing the region parameter and is therefore probably looking in a different region. That is why it is not finding your cluster.
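For example, adding the region used when the cluster was created should resolve it (a sketch based on the commands above):
eksctl create fargateprofile \
    --cluster xyz-cicd \
    --name cicd \
    --namespace cicd \
    --region us-east-2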

ecs-cli refers to old cluster after changing default profile; doesn't show EC2 instances

I've been using AWS's ECS CLI to spin clusters of EC2 instances up and down for various tasks. The problem I'm running into is that it seems to be referring to old information that I don't know how to change.
e.g., I just created a cluster, my-second-cluster, successfully and can see it in the AWS console:
$ ecs-cli up --keypair "my-keypair" --capability-iam --size 4 --instance-type t2.micro --port 22 --cluster-config my-second-cluster --ecs-profile a-second-profile
INFO[0001] Using recommended Amazon Linux 2 AMI with ECS Agent 1.45.0 and Docker version 19.03.6-ce
INFO[0001] Created cluster cluster=my-second-cluster region=us-east-1
INFO[0002] Waiting for your cluster resources to be created...
INFO[0002] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
INFO[0063] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
INFO[0124] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
VPC created: vpc-123abc
Security Group created: sg-123abc
Subnet created: subnet-123abc
Subnet created: subnet-123def
Cluster creation succeeded.
...but ecs-cli ps returns an error referring to an old cluster:
$ ecs-cli ps
FATA[0000] Error executing 'ps': Cluster 'my-first-cluster' is not active. Ensure that it exists
Specifying the cluster explicitly (ecs-cli ps --cluster my-second-cluster --region us-east-1) returns nothing, even though I see the 4 EC2 instances when I log into the AWS console.
Supporting details:
Before creating this second cluster, I created a second profile and set it to the default. I also set the new cluster to be the default.
$ ecs-cli configure profile --access-key <MY_ACCESS_KEY> --secret-key <MY_SECRET_KEY> --profile-name a-second-profile
$ ecs-cli configure profile default --profile-name a-second-profile
$ ecs-cli configure --cluster my-second-cluster --region us-east-1
INFO[0000] Saved ECS CLI cluster configuration default.
It's unclear to me where these ECS profile and cluster configs are stored (I'd expect to see them as files in ~/.aws, but no), or how to manipulate them beyond the CLI commands, which don't give great feedback. Any ideas on what I'm missing?
The ECS CLI stores its credentials at ~/.ecs/credentials.
When you create the initial profile, its name is default and it is used by default. When you set a-second-profile as the default, the metadata is updated to use a-second-profile by default, but you still have a profile named default that points to the original credentials.
My guess is that to see the first cluster you now need to specify a profile name, since you changed the default. If you didn't give your initial profile a name, it will be default.
ecs-cli ps --ecs-profile default
If you deleted your cluster configuration you may need to add the cluster again and associate to the right profile:
ecs-cli configure --cluster cluster_name --default-launch-type launch_type --region region_name --config-name configuration_name
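With the names from this question, that might look like the following (a sketch reusing the cluster and region above):
ecs-cli configure --cluster my-second-cluster --default-launch-type EC2 --region us-east-1 --config-name my-second-cluster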
I hope that makes sense. Hopefully looking at how your commands update ~/.ecs/credentials will be helpful.
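If it helps, the credentials file is plain YAML and can be inspected directly; roughly like this (a sketch, the exact keys and values here are illustrative and may differ by CLI version):
$ cat ~/.ecs/credentials
version: v1
default: a-second-profile
ecs_profiles:
  default:
    aws_access_key_id: <original access key>
    aws_secret_access_key: <original secret key>
  a-second-profile:
    aws_access_key_id: <new access key>
    aws_secret_access_key: <new secret key>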
Some resources:
ECS CLI Configurations

Getting error while creating ekscluster with the same name

I created an EKS cluster named "prod". I worked on this "prod" cluster and then deleted it. I deleted all of its associated VPCs, interfaces, security groups, everything. But if I try to create an EKS cluster with the same name "prod", I get the error below. Can you please help me with this issue?
[centos@ip-172-31-23-128 ~]$ eksctl create cluster --name prod --region us-east-2
[ℹ] eksctl version 0.13.0
[ℹ] using region us-east-2
[ℹ] setting availability zones to [us-east-2b us-east-2c us-east-2a]
[ℹ] subnets for us-east-2b - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ] subnets for us-east-2c - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ] subnets for us-east-2a - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ] nodegroup "ng-1902b9c1" will use "ami-080fbb09ee2d4d3fa" [AmazonLinux2/1.14]
[ℹ] using Kubernetes version 1.14
[ℹ] creating EKS cluster "prod" in "us-east-2" region with un-managed nodes
[ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-2 --cluster=prod'
[ℹ] CloudWatch logging will not be enabled for cluster "prod" in "us-east-2"
[ℹ] you can enable it with 'eksctl utils update-cluster-logging --region=us-east-2 --cluster=prod'
[ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "prod" in "us-east-2"
[ℹ] 2 sequential tasks: { create cluster control plane "prod", create nodegroup "ng-1902b9c1" }
[ℹ] building cluster stack "eksctl-prod-cluster"
[ℹ] 1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console
[ℹ] to cleanup resources, run 'eksctl delete cluster --region=us-east-2 --name=prod'
[✖] creating CloudFormation stack "eksctl-prod-cluster": AlreadyExistsException: Stack [eksctl-prod-cluster] already exists
    status code: 400, request id: 49258141-e03a-42af-ba8a-3fef9176063e
Error: failed to create cluster "prod"
There are two things to consider here.
The delete command does not wait for all the resources to actually be gone. You should add the --wait flag to let it finish; it usually takes around 10-15 minutes.
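For example, adding the flag to the cleanup command from the eksctl output above:
eksctl delete cluster --region=us-east-2 --name=prod --wait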
If that is still not enough, you should make sure that you delete the leftover CloudFormation stacks. It would look something like this (adjust the naming):
# delete the cluster:
#   - delete the CloudFormation stack
aws cloudformation list-stacks --query StackSummaries[].StackName
aws cloudformation delete-stack --stack-name worker-node-stack
aws eks delete-cluster --name EKStestcluster
Please let me know if that helped.
I was struggling with this error while running EKS via Terraform - I'll share my solution, as hopefully it will save others some valuable time.
I tried to follow the references below, but got the same result.
I also tried to set up different timeouts for delete and create - that still didn't help.
Finally, I was able to resolve this by changing the create_before_destroy value inside the lifecycle block to false:
lifecycle {
create_before_destroy = false
}
(*) Notice - pods are still running on the cluster during the update.
References:
Non-default node_group name breaks node group version upgrade
Changing tags causes node groups to be replaced

Stop and Start Elastic Beanstalk Services

I wanted to know if there is an option to STOP Amazon Elastic Beanstalk as an atomic unit, as I can do with EC2 servers, instead of going through each service (e.g. load balancer, EC2, ...) and STOPping (and STARTing) them independently.
The EB command line interface has an eb stop command. Here is a little bit about what the command actually does:
The eb stop command deletes the AWS resources that are running your application (such as the ELB and the EC2 instances). However, it leaves behind all of the application versions and configuration settings that you had deployed, so you can quickly get started again. eb stop is ideal when you are developing and testing your application and don't need the AWS resources running overnight. You can get going again by simply running eb start.
EDIT:
As stated in the below comment, this is no longer a command in the new eb-cli.
If you have a load-balanced environment, you can try the following trick:
$ aws autoscaling update-auto-scaling-group \
--auto-scaling-group-name my-auto-scaling-group \
--min-size 0 --max-size 0 --desired-capacity 0
It will remove all instances from the environment but won't delete the environment itself. Unfortunately, you will still pay for the Elastic Load Balancer, but usually EC2 is the most "heavy" part.
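To find the name of the Auto Scaling group that Beanstalk created (my-auto-scaling-group above), you can query the environment resources; a sketch, assuming an environment named test as in the status output further down:
$ aws elasticbeanstalk describe-environment-resources --region us-east-1 \
    --environment-name test \
    --query "EnvironmentResources.AutoScalingGroups[].Name"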
Does it work for 0?
yes, it does
$ aws autoscaling describe-auto-scaling-groups --region us-east-1 \
--auto-scaling-group-name ASG_NAME \
--query "AutoScalingGroups[].{DesiredCapacity:DesiredCapacity,MinSize:MinSize,MaxSize:MaxSize}"
[
{
"MinSize": 2,
"MaxSize": 2,
"DesiredCapacity": 2
}
]
$ aws autoscaling update-auto-scaling-group --region us-east-1 \
--auto-scaling-group-name ASG_NAME \
--min-size 0 --max-size 0 --desired-capacity 0
$ aws autoscaling describe-auto-scaling-groups --region us-east-1 \
--auto-scaling-group-name ASG_NAME \
--query "AutoScalingGroups[].{DesiredCapacity:DesiredCapacity,MinSize:MinSize,MaxSize:MaxSize}"
[
{
"MinSize": 0,
"MaxSize": 0,
"DesiredCapacity": 0
}
]
And then you can check the environment status:
$ eb status -v
Environment details for: test
Application name: TEST
Region: us-east-1
Deployed Version: app-170925_181953
Environment ID: e-1234567890
Platform: arn:aws:elasticbeanstalk:us-east-1::platform/Multi-container Docker running on 64bit Amazon Linux/2.7.4
Tier: WebServer-Standard
CNAME: test.us-east-1.elasticbeanstalk.com
Updated: 2017-09-25 15:23:22.980000+00:00
Status: Ready
Health: Grey
Running instances: 0
In the Beanstalk web console you will see the following message:
INFO Environment health has transitioned from Ok to No Data.
There are no instances. Auto Scaling group desired capacity is set to zero.
eb stop is deprecated. I also had the same problem, and the only solution I could come up with was to back up the environment and then restore it.
Here's a blog post in which I explain it:
http://pminkov.github.io/blog/how-to-shut-down-and-restore-an-elastic-beanstalk-environment.html