AWS EKS - Unable to read logs for any container from CLI

I have deployed an AWS EKS cluster, and at first I was able to read logs from the CLI without any issues. After deploying an application, I started getting an error from the CLI.
When running the logs command I got no output at first, but after waiting a while I started getting this error:
logs command: kubectl logs "appname" -n "namespace"
error: Error from server: Get https://x.x.x.x:10250/containerLogs/"namespace"/"appname": dial tcp x.x.x.x:10250: i/o timeout

After editing the security group for the server (CLI) running against the EKS cluster and adding the specific port (10250, the kubelet port shown in the error above), it started working.
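For reference, a minimal sketch of adding that rule with the AWS CLI (the group IDs sg-nodes and sg-client below are hypothetical placeholders; substitute the IDs from your own setup):

# Allow inbound TCP 10250 (the kubelet port from the error above) on the
# node security group, sourced from the client/control-plane security group.
# sg-nodes and sg-client are placeholder IDs for illustration only.
aws ec2 authorize-security-group-ingress \
  --group-id sg-nodes \
  --protocol tcp \
  --port 10250 \
  --source-group sg-client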

Related

Bridge to Kubernetes failed to find services

I am trying to get Bridge to Kubernetes to work with my AWS EKS cluster. The command-line kubectl commands work, and I can communicate with my EKS cluster in VS Code via the Kubernetes extension, so I believe my .kube/config is correct. When I hit "Kubernetes: Debug (Local Tunnel)", I get an error:
Failed to configure Bridge to Kubernetes: Failed to find any services running in the namespace <correct_namespace> of cluster <correct_cluster>
What am I missing? Everything I've seen suggests Bridge to Kubernetes should be able to connect. Is there an additional EKS security policy that Bridge to Kubernetes requires to work?
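No answer is recorded for this one, but a reasonable first check (my suggestion, not from the original thread) is whether kubectl itself lists any services in that namespace, since that is exactly what the error says it cannot find:

# Bridge to Kubernetes targets Service objects; confirm some exist.
kubectl get services -n <correct_namespace>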

Jenkins localhost handshaking failure with AWS k8s cluster

I am using a local setup of Jenkins.
I have an AWS k8s cluster already running.
I tried adding the kubeconfig file configuration in Jenkins credentials.
But when I try it from the Jenkins Test Connection, it gives me the following error.
I then tried to follow the steps mentioned in StackOverflow_Ticket, but that is also giving me an UnknownHostException.
Any idea what is missing?
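An UnknownHostException generally means the API server hostname in the kubeconfig does not resolve from the Jenkins host. A quick check (my suggestion, not from the original thread) is to extract the server URL and test DNS resolution of its hostname:

# Print the API server URL from the active kubeconfig context.
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# Then resolve the hostname portion of that URL (placeholder shown).
nslookup <your-eks-api-endpoint>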

Error response from daemon: ........... connect: connection refused AWS ECS

Stopped reason "Error response from daemon: create ecs-parser-api-dev-45-allsorter-efs-c4c8df9386aaff820100: Post http://%2Frun%2Fdocker%2Fplugins%2Famazon-ecs-volume-plugin.sock/VolumeDriver.Create: dial unix /run/docker/plugins/amazon-ecs-volume-plugin.sock: connect: connection refused"
AWS ECS EC2 based:
It was working fine. suddenly a new revision came and it didn't run the new revision but it stopped with above error
This indicates a problem with the ECS volume plugin and not necessarily a permissions/security group issue between ECS and EFS.
Replace your instances and ensure you are running the latest ECS AMI (or ECS agent if you have a custom AMI). That should resolve the issue.
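As a follow-up check (not part of the original answer): before replacing an instance you can confirm which ECS agent version it is running, since the agent exposes a local introspection endpoint on the container instance:

# Run on the container instance itself; port 51678 is the ECS agent's
# introspection API, and the response includes the agent version.
curl -s http://localhost:51678/v1/metadata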

Cannot deploy using AWS CodeDeploy: too few healthy instances are available for deployment

I am trying to deploy an application to an EC2 instance from an S3 bucket. I created an instance with the required S3 permissions and also a CodeDeploy application with the required EC2 permissions.
When I try to deploy, though, I get:
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems. (Error code: HEALTH_CONSTRAINTS.)
I SSHed into the EC2 instance to check the CodeDeploy log, and this is what I get in the log:
2018-08-18 20:52:11 INFO [codedeploy-agent(2704)]: On Premises config file does not exist or not readable
2018-08-18 20:52:11 ERROR [codedeploy-agent(2704)]: booting child: error during start or run: Errno::ENETUNREACH - Network is unreachable - connect(2) - /usr/share/ruby/net/http.rb:878:in `initialize'
I tried changing the permissions, restarting the CodeDeploy agent, and creating a brand new CodeDeploy application. Nothing seems to work.
In order for the agent to pick up commands from CodeDeploy, your host needs to have network access to the internet, which can be restricted by your EC2 security groups, VPC, configuration on your host, etc. To see if you have access, try pinging the CodeDeploy endpoint:
ping codedeploy.us-west-2.amazonaws.com
Be sure to use the endpoint for the region your host is in, though - see here.
If you've configured the agent to use a proxy, you may have to restart the agent as described here.
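For completeness, restarting the agent on Amazon Linux looks like the following (standard commands from the CodeDeploy agent documentation; the service name assumes the default install):

# Restart the CodeDeploy agent and confirm it is running.
sudo service codedeploy-agent restart
sudo service codedeploy-agent status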

Running storm nimbus on an AWS EC2 cluster always fails with "Failed to get local hostname"

I am trying to run Storm on my AWS EC2 instances as a cluster. However, I get this error:
ERROR in ch.qos.logback.core.util.ContextUtil#550571f6 - Failed to get local hostname java.net.UnknownHostException: ip-xxx-xxx-xxx-xxx: ip-xxx-xxx-xxx-xxx: Name or service not known
I also got a similar error (java.net.UnknownHostException) when trying to run kafka-console-consumer.sh.
What's wrong with this? Thanks
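No answer is recorded here, but a common cause of this exception (my assumption, not from the original question) is that the instance's private hostname does not resolve locally. A quick check, and a minimal stopgap fix via /etc/hosts, would be:

# Check whether the hostname the JVM sees actually resolves.
hostname
getent hosts "$(hostname)"
# If it does not resolve, map it to loopback as a stopgap.
echo "127.0.0.1 $(hostname)" | sudo tee -a /etc/hosts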