Api failure reason: AGENT - amazon-web-services

I am getting an error while running a task in an AWS ECS cluster.
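When a task stops with an AGENT-level failure reason, the stopped reason recorded on the task usually says more than the console summary. A minimal diagnostic sketch with the AWS CLI, assuming a hypothetical cluster name my-cluster:

# List recently stopped tasks in the cluster (my-cluster is a placeholder)
aws ecs list-tasks --cluster my-cluster --desired-status STOPPED

# Show why a specific task stopped, including per-container reasons
aws ecs describe-tasks --cluster my-cluster --tasks <task-arn> \
  --query 'tasks[].[stoppedReason,containers[].reason]'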

Related

How to create an Amazon Elastic Container Service task role to be configured in the task definition initialized by Fargate in ECS

I created a task using Fargate and would like to access the container via aws ecs exec for debugging purposes, so I tried the following AWS CLI command to enable execute command:
aws ecs update-service --cluster dictionary --service dictionary-service --region eu-north-1 --enable-execute-command --force-new-deployment
But encountered the following error:
An error occurred (InvalidParameterException) when calling the UpdateService operation: The service couldn't be updated because a valid taskRoleArn is not being used. Specify a valid task role in your task definition and try again.
Question: How to create an Amazon Elastic Container Service Task Role in IAM?
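A minimal sketch of creating such a task role with the AWS CLI, using a hypothetical role name ecsTaskRole and the ssmmessages permissions that ECS Exec needs; the role is then referenced as taskRoleArn in the task definition:

# Trust policy letting ECS tasks assume the role
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ecs-tasks.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role --role-name ecsTaskRole \
  --assume-role-policy-document file://trust-policy.json

# Inline policy with the SSM channel permissions required by ECS Exec
cat > ecs-exec-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "ssmmessages:CreateControlChannel",
      "ssmmessages:CreateDataChannel",
      "ssmmessages:OpenControlChannel",
      "ssmmessages:OpenDataChannel"
    ],
    "Resource": "*"
  }]
}
EOF

aws iam put-role-policy --role-name ecsTaskRole \
  --policy-name ecs-exec --policy-document file://ecs-exec-policy.json

After that, register a new revision of the task definition with taskRoleArn set to the new role's ARN, point the service at that revision, and re-run the update-service command above.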

Kubectl logs: Authorization error (user=kube-apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy) in AWS EKS

I accidentally deleted the IAM role in the AWS console that originally created the cluster. I can still access the cluster with kubectl get pods, but I get errors when accessing logs for any pod in the cluster.
The AWS IAM role has the following policies attached:
AmazonEKS_CNI_Policy
AmazonEKSWorkerNodePolicy
AmazonEC2ContainerRegistryReadOnly
AmazonEKSClusterPolicy
AmazonEKSVPCResourceController
The user has already been added to the aws-auth ConfigMap.
Command used:
kubectl logs "podname" -n "cluster_name"
Error:
Error from server (InternalError): Internal error occurred: Authorization error (user=kube-apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)
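The error says the API server's kubelet client identity (kube-apiserver-kubelet-client) is not authorized to proxy to the nodes' kubelet API, which is what kubectl logs relies on. One commonly suggested remediation, sketched here rather than taken from AWS guidance, is to bind that user to the built-in system:kubelet-api-admin ClusterRole (the binding name below is arbitrary):

# Grant the API server's client certificate identity access to the kubelet API
kubectl create clusterrolebinding apiserver-kubelet-api-admin \
  --clusterrole=system:kubelet-api-admin \
  --user=kube-apiserver-kubelet-client

Also note that kubectl logs takes a namespace after -n, not the cluster name, so double-check that the namespace in the command is correct.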

Code deployment in AWS says "overall deployment failed because too many individual instances failed deployment"

I was trying to deploy a web server to an EC2 instance; this is the error I'm getting in the deployment phase.
I'm uploading the code to S3 and deploying it to EC2 via CodeDeploy.
The CodeDeploy agent service is running on my EC2 instance.
Even though I attached a role to the EC2 instance with the AWSCodeDeployFullAccess, AmazonEC2FullAccess, and AmazonS3FullAccess policies, and a CodeDeploy access role for my deployment group, the deployment still fails. I was deploying on an Amazon Linux machine.
Deployment event log in AWS:
This is my YAML file:
and the corresponding scripts in the scripts/ folder.
Failed event log:
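Since the event log and YAML are not shown above, the per-instance failure details can be pulled with the CodeDeploy CLI. A short sketch, using placeholder IDs (d-XXXXXXXXX for the deployment and i-0123456789abcdef0 for the instance):

# List instances that took part in the deployment
aws deploy list-deployment-instances --deployment-id d-XXXXXXXXX

# Show the lifecycle events that failed on a given instance, with their diagnostics
aws deploy get-deployment-instance --deployment-id d-XXXXXXXXX \
  --instance-id i-0123456789abcdef0 \
  --query 'instanceSummary.lifecycleEvents[?status==`Failed`].diagnostics'

The diagnostics include the script name, error code, and log tail of the hook that failed, which usually points at the real cause behind the generic "too many individual instances failed" message.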

Unable to deploy code on ec2 instance using codedeploy

I have a single EC2 instance running Ubuntu Server, and I am trying to implement a CI/CD flow using CodeDeploy with Bitbucket as the source. I have also installed the codedeploy-agent on the EC2 instance, and it is installed and running successfully, but whenever I deploy code to the instance the deployment fails with the error shown below:
The overall deployment failed because too many individual instances failed deployment, too few
healthy instances are available for deployment, or some instances in your deployment group are
experiencing problems.
The CodeDeploy agent log file, which I am viewing with less /var/log/aws/codedeploy-agent/codedeploy-agent.log, shows the error below:
ERROR [codedeploy-agent(31598)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller:
Missing credentials - please check if this instance was started with an IAM instance profile
I am unable to understand how to overcome this error; can someone let me know?
The CodeDeploy agent requires IAM permissions, which are provided by the IAM role/instance profile attached to your instance. The exact permissions needed are listed in the AWS docs:
Step 4: Create an IAM instance profile for your Amazon EC2 instances
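A condensed sketch of that step with the AWS CLI, using hypothetical names (CodeDeployEC2Role, CodeDeployEC2Profile, and a placeholder instance ID) and the AmazonEC2RoleforAWSCodeDeploy managed policy; after attaching the profile, restart the agent so it picks up the credentials:

# Role that EC2 instances can assume
cat > ec2-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role --role-name CodeDeployEC2Role \
  --assume-role-policy-document file://ec2-trust.json

# Managed policy that lets the agent read revisions from S3
aws iam attach-role-policy --role-name CodeDeployEC2Role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforAWSCodeDeploy

# Wrap the role in an instance profile and attach it to the instance
aws iam create-instance-profile --instance-profile-name CodeDeployEC2Profile
aws iam add-role-to-instance-profile --instance-profile-name CodeDeployEC2Profile \
  --role-name CodeDeployEC2Role
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=CodeDeployEC2Profile

# Restart the agent so it picks up the new credentials
sudo service codedeploy-agent restart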

Unable to kubectl apply -f to my EKS cluster : Access denied (Roles may not be assumed by root accounts)

I created a Kubernetes cluster on AWS EKS and now I'm trying to deploy an app using the command:
kubectl apply -f deployment.yaml
But I'm getting this error:
An error occurred (AccessDenied) when calling the AssumeRole operation: Roles may not be assumed by root accounts.
Unable to connect to the server: getting credentials: exec: executable aws failed with exit code 254
I'm new to AWS and EKS, and from some Google research it seems the issue might be caused by the authenticated user in the AWS CLI.
For reference, my CLI is configured with a user called aws_cli_user that has the AdministratorAccess policy.
To create the Kubernetes cluster, I created a role called EksSocaClusterRole that has the AmazonEKSClusterPolicy.
I think I'm missing something to link the role and the user so I can push things to the cluster correctly.
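That error typically means the kubeconfig entry generated for the cluster tells the AWS CLI to assume a role (a --role-arn on the aws eks get-token exec command), while the CLI itself is running with root-account credentials, which cannot assume roles. A sketch of how to check and fix this, with <cluster-name> and <region> as placeholders:

# Confirm which identity the CLI is actually using; it should be the
# aws_cli_user IAM user, not the account root
aws sts get-caller-identity

# Regenerate the kubeconfig entry as that user, without a --role-arn,
# so kubectl authenticates directly as aws_cli_user
aws eks update-kubeconfig --name <cluster-name> --region <region>

# If kubectl still tries to assume a role, inspect the active kubeconfig and
# remove the --role-arn argument from the aws eks get-token exec section
kubectl config view --minify

Keep in mind that only the identity that created the cluster has access by default, so if the cluster was created while assuming EksSocaClusterRole, kubectl either needs to assume that same role with non-root credentials or aws_cli_user has to be added to the cluster's aws-auth ConfigMap.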