Unable to create a fargate profile for the AWS EKS cluster

I have an AWS EKS cluster named xyz-cicd in the Ohio region (us-east-2), which I created using the eksctl command below:
eksctl create cluster --name xyz-cicd --region us-east-2 --fargate
It took some time to create the cluster with a default Fargate profile. Now I want to create an additional profile for the same cluster, so I ran the following command, which gives me an error:
vinod827@Vinods-MacBook-Pro cicd % eksctl create fargateprofile \
--cluster xyz-cicd \
--name cicd \
--namespace cicd
Error: fetching cluster status to determine operability: unable to describe cluster control plane: ResourceNotFoundException: No cluster found for name: xyz-cicd.
{
RespMetadata: {
StatusCode: 404,
RequestID: "c12bd05c-3eb6-40bf-a972-f1cba139ea9a"
},
Message_: "No cluster found for name: xyz-cicd."
}
vinod827@Vinods-MacBook-Pro cicd %
Please note there is no issue with the cluster name or region. The cluster does exist in this same region, but I'm not sure why the eksctl command returns an error stating that no cluster was found with that name. If the cluster didn't exist, I wouldn't be able to schedule a pod on the default profile, yet I can. Please advise, thanks.

Your second command is missing the --region parameter, so it is probably looking in a different region. That is why it is not finding your cluster.
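For example, passing the region explicitly (us-east-2, as used in the original create command) should let eksctl find the cluster:
eksctl create fargateprofile \
--cluster xyz-cicd \
--region us-east-2 \
--name cicd \
--namespace cicd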

Related

How to switch credentials in kubectl?

I use clusters in different AWS accounts, and every time I want to connect via kubectl I have to run aws configure.
How can I make kubectl use the other account's credentials?
I think that if I pass --kubeconfig I can pass the credentials, but I can't figure out how to do it.
You can manage the different clusters with a single kubeconfig as well;
all you need to do is change the context for each account's cluster within that single file:
kubectl config get-contexts
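To actually switch between accounts, use kubectl config use-context with one of the context names reported above (the name below is just a placeholder):
kubectl config use-context <context-name>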
But if you are still looking for another approach, you can create a separate kubeconfig for each account.
export KUBECONFIG=~/.kube/aws-account-1
aws eks update-kubeconfig --name my-clust --region us-east-1 --profile account-1
For the second account:
export KUBECONFIG=~/.kube/aws-account-2
aws eks update-kubeconfig --name my-clust --region us-east-1 --profile account-2
Now, whenever you want to manage the cluster in account 1, point KUBECONFIG at the account 1 file:
export KUBECONFIG=~/.kube/aws-account-1
kubectl get pods
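As a side note, kubectl can also merge several kubeconfig files if KUBECONFIG is set to a colon-separated list of paths (file names here follow the example above), so you can keep both accounts visible at once:
export KUBECONFIG=~/.kube/aws-account-1:~/.kube/aws-account-2
kubectl config get-contexts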

Container Insights on Amazon EKS AccessDeniedException

I'm trying to add Container Insights to my EKS cluster but am running into a bit of an issue when deploying. According to my logs, I'm getting the following:
[error] [output:cloudwatch_logs:cloudwatch_logs.2] CreateLogGroup API responded with error='AccessDeniedException'
[error] [output:cloudwatch_logs:cloudwatch_logs.2] Failed to create log group
The strange part is that the role it seems to be assuming is the same role attached to my EC2 worker nodes rather than the role for the service account I have created. I'm creating the service account, and I can see it within AWS successfully, using the following command:
eksctl create iamserviceaccount --region ${env:AWS_DEFAULT_REGION} --name cloudwatch-agent --namespace amazon-cloudwatch --cluster ${env:CLUSTER_NAME} --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy --override-existing-serviceaccounts --approve
Despite the service account being created successfully, I continue to get the AccessDeniedException.
One thing I found is that the logs work fine when I manually attach the CloudWatchAgentServerPolicy to my worker nodes; however, this is not the implementation I would like. I would rather have an automated way of adding the service account, without touching the worker nodes directly, if possible. The steps I followed can be found at the bottom of this documentation.
Thanks so much!
For anyone running into this issue: within the quickstart yaml, there is a fluent-bit service account that must be removed from that file and created manually. In my case, I created it using the following command:
eksctl create iamserviceaccount --region ${env:AWS_DEFAULT_REGION} --name fluent-bit --namespace amazon-cloudwatch --cluster ${env:CLUSTER_NAME} --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy --override-existing-serviceaccounts --approve
After running this command and removing the fluent-bit service account from the yaml, delete and reapply all of your amazon-cloudwatch namespace items and it should be working.
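For reference, the delete-and-reapply step can be as simple as the following, assuming the edited quick-start manifest is saved locally as quickstart.yaml (the file name here is just a placeholder):
kubectl delete -f quickstart.yaml --ignore-not-found
kubectl apply -f quickstart.yaml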

unable to get nodegroup info using eksctl

Total noob and have a runaway EKS cluster adding up $$ on AWS.
I'm having a tough time scaling down my cluster and am not sure what to do. I'm following the recommendations in How to stop AWS EKS Worker Instances (referenced below).
If I run:
"eksctl get cluster", I get the following:
NAME REGION EKSCTL CREATED
my-cluster us-west-2 True
unique-outfit-1636757727 us-west-2 True
I then try the next line "eksctl get nodegroup --cluster my-cluster" and get:
2021-11-15 15:31:14 [ℹ] eksctl version 0.73.0
2021-11-15 15:31:14 [ℹ] using region us-west-2
Error: No nodegroups found
I'm desperate to scale down the cluster, but I'm stuck at the above command.
It seems everything installed and is running as intended, but the management part is failing! What am I doing wrong? Thanks in advance!
Reference --
eksctl get cluster
eksctl get nodegroup --cluster CLUSTERNAME
eksctl scale nodegroup --cluster CLUSTERNAME --name NODEGROUPNAME --nodes NEWSIZE
To completely scale down the nodes to zero use this (max=0 threw errors):
eksctl scale nodegroup --cluster CLUSTERNAME --name NODEGROUPNAME --nodes 0 --nodes-max 1 --nodes-min 0
You don't have a managed node group, therefore eksctl does not return any node group results. The same applies to the aws eks CLI.
...scaling down my cluster...
You can log on to the console, go to EC2 -> Auto Scaling Groups, locate the relevant Auto Scaling group, and scale it by updating the "Group details". Depending on how your cluster was created, you can look for the tag kubernetes.io/cluster/<your cluster name> to find the correct group.
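If you prefer the CLI, the same scale-down can be done with the aws autoscaling command; the group name below is a placeholder for the Auto Scaling group tagged with your cluster name:
aws autoscaling update-auto-scaling-group --auto-scaling-group-name <asg-name> --min-size 0 --desired-capacity 0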

ecs-cli refers to old cluster after changing default profile; doesn't show EC2 instances

I've been using AWS's ECS CLI to spin clusters of EC2 instances up and down for various tasks. The problem I'm running into is that it seems to be referring to old information that I don't know how to change.
e.g., I just successfully created a cluster, my-second-cluster, and can see it in the AWS console:
$ ecs-cli up --keypair "my-keypair" --capability-iam --size 4 --instance-type t2.micro --port 22 --cluster-config my-second-cluster --ecs-profile a-second-profile
INFO[0001] Using recommended Amazon Linux 2 AMI with ECS Agent 1.45.0 and Docker version 19.03.6-ce
INFO[0001] Created cluster cluster=my-second-cluster region=us-east-1
INFO[0002] Waiting for your cluster resources to be created...
INFO[0002] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
INFO[0063] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
INFO[0124] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
VPC created: vpc-123abc
Security Group created: sg-123abc
Subnet created: subnet-123abc
Subnet created: subnet-123def
Cluster creation succeeded.
...but ecs-cli ps returns an error referring to an old cluster:
$ ecs-cli ps
FATA[0000] Error executing 'ps': Cluster 'my-first-cluster' is not active. Ensure that it exists
Specifying the cluster explicitly (ecs-cli ps --cluster my-second-cluster --region us-east-1) returns nothing, even though I see the 4 EC2 instances when I log into the AWS console.
Supporting details:
Before creating this second cluster, I created a second profile and set it to the default. I also set the new cluster to be the default.
$ ecs-cli configure profile --access-key <MY_ACCESS_KEY> --secret-key <MY_SECRET_KEY> --profile-name a-second-profile
$ ecs-cli configure profile default --profile-name a-second-profile
$ ecs-cli configure --cluster my-second-cluster --region us-east-1
INFO[0000] Saved ECS CLI cluster configuration default.
It's unclear to me where these ECS profile and cluster configs are stored (I'd expect to see them as files in ~/.aws, but no), or how to manipulate them beyond the cli commands that don't give great feedback. Any ideas on what I'm missing?
The ECS CLI stores its credentials at ~/.ecs/credentials.
When you create the initial profile, its name is default and it is used by default. When you set a-second-profile as the default, the metadata is updated to use a-second-profile by default, but you still have a profile named default that points to the original creds.
My guess is that to see the first cluster you now need to specify a profile name, since you changed the default. If you didn't give your initial profile a name, it will be default.
ecs-cli ps --ecs-profile default
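Similarly, for the newly created cluster you may need to name both the cluster configuration and the profile explicitly (the names below are taken from the configure commands above):
ecs-cli ps --cluster-config default --ecs-profile a-second-profile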
If you deleted your cluster configuration, you may need to add the cluster again and associate it with the right profile:
ecs-cli configure --cluster cluster_name --default-launch-type launch_type --region region_name --config-name configuration_name
I hope that makes sense. Hopefully looking at how your commands update ~/.ecs/credentials will be helpful.
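To inspect what is currently stored, you can print the files the ECS CLI keeps under ~/.ecs (credentials for profiles, and typically a config file for cluster configurations):
cat ~/.ecs/credentials
cat ~/.ecs/config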
Some resources:
ECS CLI Configurations

Error while executing aws eks commands on aws-cli

I successfully installed the EKS CLI in the terminal. But when I try to execute the command
aws eks --us-east-1 region update-kubeconfig --name codefresh
it shows an error saying
aws: error: argument command: Invalid choice
It would be great if someone could help me with the proper solution.
You have an error in your call. You specify the region with --region us-east-1, not with --us-east-1 region.
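So the call should look like this:
aws eks update-kubeconfig --region us-east-1 --name codefresh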