eksctl create cluster failed - amazon-web-services

I'm trying to create an EKS cluster following the workshop tutorial
https://www.eksworkshop.com/030_eksctl/launcheks/
yet eksctl create cluster failed. Here's the log shown in the terminal:
ec2-user:~/environment $ eksctl create cluster -f eksworkshop.yaml
2022-04-26 05:08:30 [!] SSM is now enabled by default; `ssh.enableSSM` is deprecated and will be removed in a future release
2022-04-26 05:08:30 [ℹ] eksctl version 0.94.0
2022-04-26 05:08:30 [ℹ] using region us-west-1
2022-04-26 05:08:30 [ℹ] subnets for us-west-1b - public:192.168.0.0/19 private:192.168.96.0/19
2022-04-26 05:08:30 [ℹ] subnets for us-west-1c - public:192.168.32.0/19 private:192.168.128.0/19
2022-04-26 05:08:30 [ℹ] subnets for - public:192.168.64.0/19 private:192.168.160.0/19
2022-04-26 05:08:30 [ℹ] nodegroup "nodegroup" will use "" [AmazonLinux2/1.19]
2022-04-26 05:08:30 [ℹ] using Kubernetes version 1.19
2022-04-26 05:08:30 [ℹ] creating EKS cluster "eksworkshop-eksctl" in "us-west-1" region with managed nodes
2022-04-26 05:08:30 [ℹ] 1 nodegroup (nodegroup) was included (based on the include/exclude rules)
2022-04-26 05:08:30 [ℹ] will create a CloudFormation stack for cluster itself and 0 nodegroup stack(s)
2022-04-26 05:08:30 [ℹ] will create a CloudFormation stack for cluster itself and 1 managed nodegroup stack(s)
2022-04-26 05:08:30 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-1 --cluster=eksworkshop-eksctl'
2022-04-26 05:08:30 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "eksworkshop-eksctl" in "us-west-1"
2022-04-26 05:08:30 [ℹ] CloudWatch logging will not be enabled for cluster "eksworkshop-eksctl" in "us-west-1"
2022-04-26 05:08:30 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-west-1 --cluster=eksworkshop-eksctl'
2022-04-26 05:08:30 [ℹ]
2 sequential tasks: { create cluster control plane "eksworkshop-eksctl",
    2 sequential sub-tasks: {
        wait for control plane to become ready,
        create managed nodegroup "nodegroup",
    }
}
2022-04-26 05:08:30 [ℹ] building cluster stack "eksctl-eksworkshop-eksctl-cluster"
2022-04-26 05:08:30 [ℹ] deploying stack "eksctl-eksworkshop-eksctl-cluster"
2022-04-26 05:09:00 [ℹ] waiting for CloudFormation stack "eksctl-eksworkshop-eksctl-cluster"
2022-04-26 05:09:30 [ℹ] waiting for CloudFormation stack "eksctl-eksworkshop-eksctl-cluster"
2022-04-26 05:10:30 [ℹ] waiting for CloudFormation stack "eksctl-eksworkshop-eksctl-cluster"
2022-04-26 05:11:30 [ℹ] waiting for CloudFormation stack "eksctl-eksworkshop-eksctl-cluster"
2022-04-26 05:12:30 [ℹ] waiting for CloudFormation stack "eksctl-eksworkshop-eksctl-cluster"
2022-04-26 05:13:30 [ℹ] waiting for CloudFormation stack "eksctl-eksworkshop-eksctl-cluster"
2022-04-26 05:14:30 [ℹ] waiting for CloudFormation stack "eksctl-eksworkshop-eksctl-cluster"
2022-04-26 05:15:30 [ℹ] waiting for CloudFormation stack "eksctl-eksworkshop-eksctl-cluster"
2022-04-26 05:16:31 [ℹ] waiting for CloudFormation stack "eksctl-eksworkshop-eksctl-cluster"
2022-04-26 05:17:31 [ℹ] waiting for CloudFormation stack "eksctl-eksworkshop-eksctl-cluster"
2022-04-26 05:18:31 [ℹ] waiting for CloudFormation stack "eksctl-eksworkshop-eksctl-cluster"
2022-04-26 05:19:31 [ℹ] waiting for CloudFormation stack "eksctl-eksworkshop-eksctl-cluster"
2022-04-26 05:20:31 [ℹ] waiting for CloudFormation stack "eksctl-eksworkshop-eksctl-cluster"
2022-04-26 05:20:31 [!] 1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console
2022-04-26 05:20:31 [ℹ] to cleanup resources, run 'eksctl delete cluster --region=us-west-1 --name=eksworkshop-eksctl'
2022-04-26 05:20:31 [✖] getting stack "eksctl-eksworkshop-eksctl-cluster" outputs: couldn't import subnet subnet-06ea5af280253e579: subnet ID "subnet-0068d4ea9652c80bc" is not the same as "subnet-06ea5af280253e579"
Error: failed to create cluster "eksworkshop-eksctl"
What is the potential cause of this? The IAM role is valid:
ec2-user:~/environment $ aws sts get-caller-identity --query Arn | grep eksworkshop-admin -q && echo "IAM role valid" || echo "IAM role NOT valid"
IAM role valid
My YAML file was attached as a screenshot (not reproduced here).
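Since the config itself isn't visible, a hedged first check is to compare the subnet IDs referenced in eksworkshop.yaml (e.g. under vpc.subnets) against the subnets that actually exist in the region:

# List real subnet IDs, AZs and CIDRs in us-west-1 for comparison with the config
aws ec2 describe-subnets --region us-west-1 \
  --query 'Subnets[].[SubnetId,AvailabilityZone,CidrBlock]' \
  --output table

The error above indicates that a subnet ID pinned in the config does not match the subnet CloudFormation tried to import, which typically points to a stale or copy-pasted ID.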

Related

Why did eksctl create iamserviceaccount fail? waiter state transitioned to Failure

I am running the command:
eksctl create iamserviceaccount --name efs-csi-controller-sa --namespace kube-system --cluster mmpana --attach-policy-arn arn:aws:iam::12345678:policy/EKS_EFS_CSI_Driver_Policy --approve --override-existing-serviceaccounts --region us-east-1
I got this error:
2023-02-07 13:36:36 [ℹ] 1 error(s) occurred and IAM Role stacks haven't been created properly, you may wish to check CloudFormation console
2023-02-07 13:36:36 [✖] waiter state transitioned to Failure
Then I checked the CloudFormation stacks (screenshots not reproduced here).
I upgraded eksctl yesterday:
eksctl version
0.128.0
I am now looking at my policy.
How do I fix this?
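One way to dig out the underlying failure reason is to describe the events of the failed IRSA stack; the stack name below follows eksctl's usual naming pattern and is an assumption:

# Show only the CREATE_FAILED events and their reasons (stack name is assumed)
aws cloudformation describe-stack-events \
  --region us-east-1 \
  --stack-name eksctl-mmpana-addon-iamserviceaccount-kube-system-efs-csi-controller-sa \
  --query 'StackEvents[?ResourceStatus==`CREATE_FAILED`].[LogicalResourceId,ResourceStatusReason]'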

Unable to connect to the server: getting credentials: decoding stdout: no kind "ExecCredential" is registered for version [closed]

**Unable to connect to the server: getting credentials: decoding stdout: no kind "ExecCredential" is registered for version "client.authentication.k8s.io/v1alpha1" in scheme "pkg/client/auth/exec/exec.go:62"**
2022-09-16 16:35:00 [ℹ] eksctl version 0.111.0
2022-09-16 16:35:00 [ℹ] using region ap-south-1
2022-09-16 16:35:00 [ℹ] skipping ap-south-1c from selection because it doesn't support the following instance type(s): t2.micro
2022-09-16 16:35:00 [ℹ] setting availability zones to [ap-south-1a ap-south-1b]
2022-09-16 16:35:00 [ℹ] subnets for ap-south-1a - public:192.168.0.0/19 private:192.168.64.0/19
2022-09-16 16:35:00 [ℹ] subnets for ap-south-1b - public:192.168.32.0/19 private:192.168.96.0/19
2022-09-16 16:35:00 [ℹ] nodegroup "ng-1" will use "" [AmazonLinux2/1.23]
2022-09-16 16:35:00 [ℹ] using Kubernetes version 1.23
2022-09-16 16:35:00 [ℹ] creating EKS cluster "basic-cluster" in "ap-south-1" region with managed nodes
2022-09-16 16:35:00 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2022-09-16 16:35:00 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=ap-south-1 --cluster=basic-cluster'
2022-09-16 16:35:00 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "basic-cluster" in "ap-south-1"
2022-09-16 16:35:00 [ℹ] CloudWatch logging will not be enabled for cluster "basic-cluster" in "ap-south-1"
2022-09-16 16:35:00 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=ap-south-1 --cluster=basic-cluster'
2022-09-16 16:35:00 [ℹ]
2 sequential tasks: { create cluster control plane "basic-cluster",
    2 sequential sub-tasks: {
        wait for control plane to become ready,
        create managed nodegroup "ng-1",
    }
}
2022-09-16 16:35:00 [ℹ] building cluster stack "eksctl-basic-cluster-cluster"
2022-09-16 16:35:00 [ℹ] deploying stack "eksctl-basic-cluster-cluster"
2022-09-16 16:35:30 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:36:01 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:37:01 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:38:01 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:39:01 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:40:01 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:41:02 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:42:02 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:43:02 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:44:02 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:45:02 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:46:03 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:48:05 [ℹ] building managed nodegroup stack "eksctl-basic-cluster-nodegroup-ng-1"
2022-09-16 16:48:05 [ℹ] deploying stack "eksctl-basic-cluster-nodegroup-ng-1"
2022-09-16 16:48:05 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-nodegroup-ng-1"
2022-09-16 16:48:36 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-nodegroup-ng-1"
2022-09-16 16:49:22 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-nodegroup-ng-1"
2022-09-16 16:49:53 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-nodegroup-ng-1"
2022-09-16 16:51:15 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-nodegroup-ng-1"
2022-09-16 16:52:09 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-nodegroup-ng-1"
2022-09-16 16:52:09 [ℹ] waiting for the control plane availability...
2022-09-16 16:52:09 [✔] saved kubeconfig as "/home/santhosh_puvaneswaran/.kube/config"
2022-09-16 16:52:09 [ℹ] no tasks
2022-09-16 16:52:09 [✔] all EKS cluster resources for "basic-cluster" have been created
2022-09-16 16:52:09 [ℹ] nodegroup "ng-1" has 3 node(s)
2022-09-16 16:52:09 [ℹ] node "ip-192-168-15-31.ap-south-1.compute.internal" is ready
2022-09-16 16:52:09 [ℹ] node "ip-192-168-35-216.ap-south-1.compute.internal" is ready
2022-09-16 16:52:09 [ℹ] node "ip-192-168-36-191.ap-south-1.compute.internal" is ready
2022-09-16 16:52:09 [ℹ] waiting for at least 3 node(s) to become ready in "ng-1"
2022-09-16 16:52:09 [ℹ] nodegroup "ng-1" has 3 node(s)
2022-09-16 16:52:09 [ℹ] node "ip-192-168-15-31.ap-south-1.compute.internal" is ready
2022-09-16 16:52:09 [ℹ] node "ip-192-168-35-216.ap-south-1.compute.internal" is ready
2022-09-16 16:52:09 [ℹ] node "ip-192-168-36-191.ap-south-1.compute.internal" is ready
2022-09-16 16:52:10 [✖] unable to use kubectl with the EKS cluster (check 'kubectl version'): WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Unable to connect to the server: getting credentials: decoding stdout: no kind "ExecCredential" is registered for version "client.authentication.k8s.io/v1alpha1" in scheme "pkg/client/auth/exec/exec.go:62"
2022-09-16 16:52:10 [ℹ] cluster should be functional despite missing (or misconfigured) client binaries
2022-09-16 16:52:10 [✔] EKS cluster "basic-cluster" in "ap-south-1" region is ready
I don't know why I am getting this error again and again. I can create and delete clusters, but I am not able to work with them!
You need to update your AWS CLI to a version above 2.7.25 (latest recommended), ensure your CLI is pointing to the right region, then try eksctl utils write-kubeconfig --cluster=<name>. Open the kubeconfig file and check that client.authentication.k8s.io/v1alpha1 has changed to client.authentication.k8s.io/v1beta1.
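A minimal sketch of that check, using the cluster name and region from the log above:

aws --version    # expect aws-cli/2.7.25 or newer
eksctl utils write-kubeconfig --cluster=basic-cluster --region=ap-south-1
grep 'client.authentication.k8s.io' ~/.kube/config    # should now show v1beta1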
For me, it worked with awscli v2.
Steps:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
If awscli v2 is already installed, update it instead:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update
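After installing or updating, a quick sanity check (assuming the kubeconfig written by eksctl is still in place):

aws --version        # should now report aws-cli/2.x
kubectl get nodes    # should authenticate without the ExecCredential error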

AWS EKS: user is not authorized to perform: iam:CreateRole on resource [closed]

I want to create a Kubernetes cluster in AWS using the command:
eksctl create cluster \
--name claireudacitycapstoneproject \
--version 1.17 \
--region us-east-1 \
--nodegroup-name standard-workers \
--node-type t2.micro \
--nodes 2 \
--nodes-min 1 \
--nodes-max 3 \
--managed
This fails with the following errors:
2021-10-22 21:25:46 [ℹ] eksctl version 0.70.0
2021-10-22 21:25:46 [ℹ] using region us-east-1
2021-10-22 21:25:48 [ℹ] setting availability zones to [us-east-1a us-east-1b]
2021-10-22 21:25:48 [ℹ] subnets for us-east-1a - public:192.168.0.0/19 private:192.168.64.0/19
2021-10-22 21:25:48 [ℹ] subnets for us-east-1b - public:192.168.32.0/19 private:192.168.96.0/19
2021-10-22 21:25:48 [ℹ] nodegroup "standard-workers" will use "" [AmazonLinux2/1.17]
2021-10-22 21:25:48 [ℹ] using Kubernetes version 1.17
2021-10-22 21:25:48 [ℹ] creating EKS cluster "claireudacitycapstoneproject" in "us-east-1" region with managed nodes
2021-10-22 21:25:48 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2021-10-22 21:25:48 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-1 --cluster=claireudacitycapstoneproject'
2021-10-22 21:25:48 [ℹ] CloudWatch logging will not be enabled for cluster "claireudacitycapstoneproject" in "us-east-1"
2021-10-22 21:25:48 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-1 --cluster=claireudacitycapstoneproject'
2021-10-22 21:25:48 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "claireudacitycapstoneproject" in "us-east-1"
2021-10-22 21:25:48 [ℹ]
2 sequential tasks: { create cluster control plane "claireudacitycapstoneproject",
    2 sequential sub-tasks: {
        wait for control plane to become ready,
        create managed nodegroup "standard-workers",
    }
}
2021-10-22 21:25:48 [ℹ] building cluster stack "eksctl-claireudacitycapstoneproject-cluster"
2021-10-22 21:25:51 [ℹ] deploying stack "eksctl-claireudacitycapstoneproject-cluster"
2021-10-22 21:26:21 [ℹ] waiting for CloudFormation stack "eksctl-claireudacitycapstoneproject-cluster"
2021-10-22 21:26:52 [ℹ] waiting for CloudFormation stack "eksctl-claireudacitycapstoneproject-cluster"
2021-10-22 21:26:54 [✖] unexpected status "ROLLBACK_IN_PROGRESS" while waiting for CloudFormation stack "eksctl-claireudacitycapstoneproject-cluster"
2021-10-22 21:26:54 [ℹ] fetching stack events in attempt to troubleshoot the root cause of the failure
2021-10-22 21:26:54 [!] AWS::EC2::EIP/NATIP: DELETE_IN_PROGRESS
2021-10-22 21:26:54 [!] AWS::EC2::VPC/VPC: DELETE_IN_PROGRESS
2021-10-22 21:26:54 [!] AWS::EC2::InternetGateway/InternetGateway: DELETE_IN_PROGRESS
2021-10-22 21:26:54 [✖] AWS::EC2::VPC/VPC: CREATE_FAILED – "Resource creation cancelled"
2021-10-22 21:26:54 [✖] AWS::EC2::InternetGateway/InternetGateway: CREATE_FAILED – "Resource creation cancelled"
2021-10-22 21:26:54 [✖] AWS::EC2::EIP/NATIP: CREATE_FAILED – "Resource creation cancelled"
2021-10-22 21:26:54 [✖] AWS::IAM::Role/ServiceRole: CREATE_FAILED – "API: iam:CreateRole User: arn:aws:iam::602502938985:user/CLI is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::602502938985:role/eksctl-claireudacitycapstoneproject-cl-ServiceRole-4CR9Z6NRNU49 with an explicit deny"
2021-10-22 21:26:54 [!] 1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console
2021-10-22 21:26:54 [ℹ] to cleanup resources, run 'eksctl delete cluster --region=us-east-1 --name=claireudacitycapstoneproject'
2021-10-22 21:26:54 [✖] ResourceNotReady: failed waiting for successful resource state
Error: failed to create cluster "claireudacitycapstoneproject"
Previously, I ran the same command and received the following error:
Error: checking AWS STS access – cannot get role ARN for current session: RequestError: send request failed
What permission do I need to provide to the AWS user to execute it?
You can check the minimum IAM requirements to run eksctl here.
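To confirm which statement is blocking the call, a hedged sketch using the IAM policy simulator (the user ARN is copied from the error above):

# Simulate iam:CreateRole for the failing user and show the decision
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::602502938985:user/CLI \
  --action-names iam:CreateRole \
  --query 'EvaluationResults[].[EvalActionName,EvalDecision]'

An EvalDecision of explicitDeny matches the error message: a deny statement has to be removed, not merely an allow added.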

Getting an error while creating an EKS cluster with the same name

I created an EKS cluster named "prod". I worked on this "prod" cluster and then deleted it, along with all of its associated VPCs, interfaces, and security groups. But if I try to create an EKS cluster with the same name, "prod", I get the error below. Can you please help me with this issue?
[centos@ip-172-31-23-128 ~]$ eksctl create cluster --name prod --region us-east-2
[ℹ] eksctl version 0.13.0
[ℹ] using region us-east-2
[ℹ] setting availability zones to [us-east-2b us-east-2c us-east-2a]
[ℹ] subnets for us-east-2b - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ] subnets for us-east-2c - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ] subnets for us-east-2a - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ] nodegroup "ng-1902b9c1" will use "ami-080fbb09ee2d4d3fa" [AmazonLinux2/1.14]
[ℹ] using Kubernetes version 1.14
[ℹ] creating EKS cluster "prod" in "us-east-2" region with un-managed nodes
[ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-2 --cluster=prod'
[ℹ] CloudWatch logging will not be enabled for cluster "prod" in "us-east-2"
[ℹ] you can enable it with 'eksctl utils update-cluster-logging --region=us-east-2 --cluster=prod'
[ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "prod" in "us-east-2"
[ℹ] 2 sequential tasks: { create cluster control plane "prod", create nodegroup "ng-1902b9c1" }
[ℹ] building cluster stack "eksctl-prod-cluster"
[ℹ] 1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console
[ℹ] to cleanup resources, run 'eksctl delete cluster --region=us-east-2 --name=prod'
[✖] creating CloudFormation stack "eksctl-prod-cluster": AlreadyExistsException: Stack [eksctl-prod-cluster] already exists
status code: 400, request id: 49258141-e03a-42af-ba8a-3fef9176063e
Error: failed to create cluster "prod"
There are two things to consider here.
The delete command does not wait for all the resources to actually be gone. You should add the --wait flag in order to let it finish; it usually takes around 10-15 minutes.
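For example, using the names from the question:

eksctl delete cluster --name prod --region us-east-2 --wait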
If that is still not enough, you should make sure that you delete the CloudFormation stacks yourself. It would look something like this (adjust the naming):
# Delete the cluster: first find and delete its CloudFormation stacks
aws cloudformation list-stacks --query 'StackSummaries[].StackName'
aws cloudformation delete-stack --stack-name worker-node-stack
# then delete the EKS cluster itself
aws eks delete-cluster --name EKStestcluster
Please let me know if that helped.
I was struggling with this error while running EKS via Terraform. I'll share my solution; hopefully it will save others some valuable time.
I tried to follow the references below, but got the same result.
I also tried to set different timeouts for delete and create, which still didn't help.
Finally I was able to resolve this when I changed the create_before_destroy value inside the lifecycle block to false:
lifecycle {
create_before_destroy = false
}
(*) Note: pods are still running on the cluster during the update.
References:
Non-default node_group name breaks node group version upgrade
Changing tags causes node groups to be replaced

ecs-cli up does not create EC2 instances

I'm trying to launch an ECS cluster using the CLI, but get stuck on EC2 instances not being created.
I've configured my ECS credentials, adding all the missing permissions extracted from the CloudFormation errors; at least I don't see any additional errors now. I've also set up a simple cluster configuration.
~/.ecs/config
clusters:
  mycluster:
    cluster: mycluster
    region: eu-north-1
    default_launch_type: EC2
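For reference, an equivalent entry can be produced with ecs-cli configure; the values below mirror the file above:

ecs-cli configure --cluster mycluster --region eu-north-1 \
  --default-launch-type EC2 --config-name mycluster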
And this is the CLI command I run:
ecs-cli up --keypair myKeyPair --capability-iam \
--size 1 --instance-type t2.micro \
--cluster-config mycluster --cluster mycluster \
--launch-type EC2 --force --verbose
I get no error messages and the cluster is created, but I see no instances connected to it, and no instances in EC2.
This is the output from the CLI command:
INFO[0000] Using recommended Amazon Linux 2 AMI with ECS Agent 1.29.1 and Docker version 18.06.1-ce
INFO[0000] Created cluster cluster=mycluster region=eu-north-1
INFO[0000] Waiting for your CloudFormation stack resources to be deleted...
INFO[0000] Cloudformation stack status stackStatus=DELETE_IN_PROGRESS
DEBU[0030] Cloudformation stack status stackStatus=DELETE_IN_PROGRESS
DEBU[0061] Cloudformation create stack call succeeded stackId=0xc00043ab11
INFO[0061] Waiting for your cluster resources to be created...
DEBU[0061] parsing event eventStatus=CREATE_IN_PROGRESS resource="arn:aws:cloudformation:eu-north-1:999987631111:stack/amazon-ecs-cli-setup-mycluster/11111111-aba2-11e9-ac3c-0e40cf291592"
INFO[0061] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
DEBU[0091] parsing event eventStatus=CREATE_IN_PROGRESS resource=subnet-0cc4a3aa110555d42
DEBU[0091] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
DEBU[0121] parsing event eventStatus=CREATE_IN_PROGRESS resource=rtbassoc-05c185a5aa11ca22e
INFO[0121] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
DEBU[0151] parsing event eventStatus=CREATE_COMPLETE resource=rtbassoc-05c185a5aa11ca22e
DEBU[0151] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
DEBU[0181] parsing event eventStatus=CREATE_COMPLETE resource=rtbassoc-05c185a5aa11ca22e
INFO[0181] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
DEBU[0212] parsing event eventStatus=CREATE_COMPLETE resource=amazon-ecs-cli-setup-mycluster-EcsInstanceProfile-1KS4Q3W9HAAAA
DEBU[0212] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
DEBU[0242] parsing event eventStatus=CREATE_COMPLETE resource="arn:aws:cloudformation:eu-north-1:999987631111:stack/amazon-ecs-cli-setup-mycluster/11111111-aba2-11e9-ac3c-0e40cf291592"
VPC created: vpc-033f7a6fedfee256d
Security Group created: sg-0e4461f781bad6681
Subnet created: subnet-0cc4a3aa110555d42
Subnet created: subnet-0a4797072dc9641d2
Cluster creation succeeded.
Running describe-clusters after a couple of hours:
aws ecs describe-clusters --clusters mycluster --region eu-north-1
gives the following output:
{
    "clusters": [
        {
            "status": "ACTIVE",
            "statistics": [],
            "tags": [],
            "clusterName": "mycluster",
            "registeredContainerInstancesCount": 0,
            "pendingTasksCount": 0,
            "runningTasksCount": 0,
            "activeServicesCount": 0,
            "clusterArn": "arn:aws:ecs:eu-north-1:999987631111:cluster/mycluster"
        }
    ],
    "failures": []
}
Does anyone know what I might be missing? I haven't hit any limits, since I've only got one other running instance (in a different region).
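Since the cluster reports registeredContainerInstancesCount of 0, one hedged check is whether the stack's Auto Scaling group exists and actually launched anything (ecs-cli up normally creates one; this query is a sketch):

# List Auto Scaling groups with their desired capacity and actual instance count
aws autoscaling describe-auto-scaling-groups --region eu-north-1 \
  --query 'AutoScalingGroups[].[AutoScalingGroupName,DesiredCapacity,length(Instances)]'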