AWS EKS no exposed ports - amazon-web-services

So I've been struggling with the fact that I'm unable to expose any deployment in my EKS cluster.
I narrowed it down to this:
My LoadBalancer service's public IP never responds.
I went to the load balancer section in my AWS console.
The load balancer is not working because my cluster node is not passing the health checks.
I SSH'd into my cluster node and found out that the containers do not have any ports associated with them.
This makes the cluster node fail the health checks, so no traffic is forwarded that way.
I tried running a simple nginx container manually, without kubectl, directly on my cluster node:
docker run -p 80:80 nginx
and then pasting the node's public IP into my browser. No luck.
Then I tried curling the nginx container directly from the cluster node over SSH:
curl localhost
And I'm getting this response: "curl: (7) Failed to connect to localhost port 80: Connection refused"
Why are the containers on the cluster node not showing any ports?
How can I make the cluster node pass the load balancer health checks?
Could it have something to do with the fact that I created a single-node cluster with eksctl?
What other options do I have to easily run a kubernetes cluster in AWS?
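For context on the "no ports" observation: on EKS, pods do not use Docker port mappings at all; the VPC CNI gives each pod its own VPC IP and kube-proxy routes Service traffic via iptables, so docker ps showing empty ports is normal. The classic ELB created for a type: LoadBalancer Service health-checks the Service's NodePort on every node. A rough sketch of how to see what is being probed (my-service is just a placeholder name):
# The NodePort column is what the ELB health check hits on each node
kubectl get svc my-service -o wide
# Empty ENDPOINTS here usually means the Service selector does not match the pod labels
kubectl get endpoints my-service
# From the node itself, the NodePort should answer even though docker ps shows no ports
curl localhost:<node-port-from-above>
# The node security group must also allow the NodePort range (30000-32767 by default)
# from the ELB's security group, otherwise the instances stay OutOfService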

This is somewhere in between an answer and a question, but I hope it will help you.
I've been using the Deploying a Kubernetes Cluster with Amazon EKS guide for years when it comes to creating EKS clusters.
For test purposes, I just spun up a new cluster and it works as expected, including accessing a test application via the external LB IP and passing health checks...
In short, you need to:
1. Create an EKS role
2. Create a VPC to use with EKS
3. Create a stack (CloudFormation) from https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/amazon-eks-vpc-sample.yaml
4. Export variables to simplify further cli command usage
export EKS_CLUSTER_REGION=
export EKS_CLUSTER_NAME=
export EKS_ROLE_ARN=
export EKS_SUBNETS_ID=
export EKS_SECURITY_GROUP_ID=
5. Create the cluster, verify its creation, and generate the appropriate kubeconfig:
#Create
aws eks --region ${EKS_CLUSTER_REGION} create-cluster --name ${EKS_CLUSTER_NAME} --role-arn ${EKS_ROLE_ARN} --resources-vpc-config subnetIds=${EKS_SUBNETS_ID},securityGroupIds=${EKS_SECURITY_GROUP_ID}
#Verify
watch aws eks --region ${EKS_CLUSTER_REGION} describe-cluster --name ${EKS_CLUSTER_NAME} --query cluster.status
#Create .kube/config entry
aws eks --region ${EKS_CLUSTER_REGION} update-kubeconfig --name ${EKS_CLUSTER_NAME}
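As a rough follow-up sketch (not part of the guide's numbered steps above): once worker nodes have joined, the "test application behind an external LB" check can be reproduced with something like this, where nginx is just an example name:
# Worker nodes (a node group) are still required for pods to schedule; verify they have joined
kubectl get nodes
# Deploy and expose a throwaway test application behind an ELB
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
# Wait for the EXTERNAL-IP / hostname to appear, then open it in a browser or curl it
kubectl get svc nginx -w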
Can you please check the article and confirm you haven't missed any steps during installation?

Related

Is there a way to point my machine's kubectl to a K8s cluster hosted on EC2 Instances?

I want to point my machine's kubectl to a cluster running on EC2 instances.
When using EKS it's fairly easy; we just need to run:
aws eks --region=<REGION> update-kubeconfig --name <CLUSTER_NAME>
However, how can I get the same behaviour if my master nodes and worker nodes are hosted directly on EC2 instances?
First, download your cluster config from the master node to your machine. Then you need to merge the downloaded config with the existing config on your local machine.
This post explains how to merge them.
Then you can manage your configs with kubectx.
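A rough sketch of that workflow, assuming a kubeadm-provisioned cluster where the admin kubeconfig lives at /etc/kubernetes/admin.conf on the master (the key, user and paths are placeholders):
# Copy the cluster's kubeconfig from the master node (may need sudo/root on the master)
scp -i ~/.ssh/my-key.pem ec2-user@<master-public-ip>:/etc/kubernetes/admin.conf ~/ec2-cluster.conf
# If the server: address in that file is a private IP, edit it to an address reachable from your machine
# Merge it with the existing local config and flatten into a single file
KUBECONFIG=~/.kube/config:~/ec2-cluster.conf kubectl config view --flatten > /tmp/merged_config
mv /tmp/merged_config ~/.kube/config
# Switch between clusters (kubectx is optional convenience on top of this)
kubectl config get-contexts
kubectl config use-context <context-name>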

Is it possible to add an ELB (cloud provider) to an existing Kubernetes cluster running on RHEL8 EC2?

I have a cluster running on AWS EC2 (not managed EKS), and I'm trying to add a load balancer to the cluster without restarting it or initializing a new node. Is that possible? I've already set the permissions and tags described in this post: https://blog.heptio.com/setting-up-the-kubernetes-aws-cloud-provider-6f0349b512bd
But the thing is that the --cloud-provider=aws flag must be added to the kubelet before the node joins the cluster.
Are there any other options or another way to do it?
You can try using the AWS Load Balancer Controller; it works with both managed and self-managed K8s clusters: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/
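A rough sketch of installing it with Helm from the controller's eks-charts repository; the cluster name, region and VPC ID are placeholders, the value names should be double-checked against the chart version you install, and the nodes' IAM role still needs the controller's documented ELB permissions:
# Install the AWS Load Balancer Controller from its Helm chart
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=<my-cluster> \
  --set region=<aws-region> \
  --set vpcId=<vpc-xxxxxxxx>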

Cannot connect to the EKS cluster after deploying with Bitbucket and Terraform

When I deploy my EKS cluster with Terraform from my terminal on the command line, I have no problem:
the deployment goes very well and I can connect to the cluster and execute commands like kubectl get svc.
However, when I go through my Bitbucket pipeline, the deployment goes well, but when I then try to connect to my EKS cluster from the command line and execute commands:
kubectl get svc
Unable to connect to the server: dial tcp: lookup 5078015BC6DAFAE1391368A19FC9.gr7.eu-central-1.eks.amazonaws.com on 192.168.43.1:53: no such host
So I checked the DNS settings of the VPC used by the EKS EC2 instances:
I clicked on the VPC
Actions > Edit DNS hostnames
and checked whether DNS hostnames was enabled.
It was already enabled.
Thanks for your help and your advice.
I created my cluster manually and was facing the same issue; in my case the error was caused by the cluster being private. I had selected the Private Cluster option while creating the cluster. If you re-create the cluster with public access, or use a bastion host, the error will be resolved. Hope it helps, thanks.
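If re-creating the cluster is not an option, the API endpoint access can also be changed in place. A rough sketch with the AWS CLI (the cluster name is a placeholder; Terraform users would set the equivalent endpoint-access inputs of their EKS module instead):
# Enable public access to the EKS API endpoint while keeping private access on
aws eks update-cluster-config \
  --region eu-central-1 \
  --name <cluster-name> \
  --resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true
# The update takes a few minutes; check the endpoint settings afterwards
aws eks describe-cluster --region eu-central-1 --name <cluster-name> \
  --query cluster.resourcesVpcConfig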

AWS Elastic Kubernetes Service: how to expose a container/pod to an Elasticsearch cluster inside AWS?

Setup: there is an EKS cluster running with 2 worker nodes, and there is a separate Elasticsearch cluster in the same VPC as the worker nodes. How can I / should I open a connection from a Logstash container in a pod on a worker node to the Elasticsearch cluster? I guess a Service is needed for Logstash, but what type, and how do I set it up? Thanks for answering!
As a comment alluded to, you can do this via standard AWS security group adjustments, i.e. make sure that your worker nodes' security group allows outbound connectivity to your Elasticsearch cluster on port 9200 (or whatever port you're using), and make sure that your Elasticsearch cluster's EC2 instances allow inbound traffic from your worker nodes on port 9200. This assumes you're not using AWS's nifty new security-group-per-pod functionality, which allows you to get even more granular with your rules.
And then to test, you can exec into your logstash pod and curl your elasticsearch cluster endpoint. You can install curl if it's not already installed.
kubectl exec -it <logstash-pod> -- /bin/bash
curl -XGET <elasticsearch-url>/_cluster/health
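A rough sketch of the security-group side with the AWS CLI, where <es-sg> is the security group on the Elasticsearch instances and <node-sg> is the worker nodes' security group (both placeholders):
# Allow inbound 9200 on the Elasticsearch instances from the worker nodes' security group
aws ec2 authorize-security-group-ingress \
  --group-id <es-sg> \
  --protocol tcp \
  --port 9200 \
  --source-group <node-sg>
# Outbound traffic from the worker nodes is usually open by default, but verify <node-sg> allows it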

Use AWS ALB on docker swarm

Has anyone tried to configure AWS Application Load Balancing for a Docker Swarm running on EC2 instances (not on EC2 Container Service)? Most documentation only covers Docker for AWS. I saw a post saying that you must include the ARN in a label, but I think it's still not working. Also, the DNS name of the load balancer does not show the nginx page, even though port 80 is already allowed in our security group.
This is the command I used when creating the service:
docker service create --name=test --publish 80:80 --publish 444:80 --constraint 'engine.labels.serverType == dev' --replicas=2 --label com.docker.aws.lb.arn="<arn-value-here>" nginx:alpine
Current setup:
EC2 instance
Subnet included in the load balancer
Any insights will be much appreciated.
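For what it's worth, the com.docker.aws.lb.arn label appears to be specific to the Docker for AWS stack, so on plain EC2 instances the Swarm service is usually wired to the ALB by hand. A rough sketch with the AWS CLI, assuming the ALB and a port-80 target group already exist (ARNs and instance IDs are placeholders):
# Register each Swarm node in the ALB target group; the routing mesh publishes
# port 80 on every node, so any registered node can receive the traffic
aws elbv2 register-targets \
  --target-group-arn <target-group-arn> \
  --targets Id=<instance-id-1> Id=<instance-id-2>
# Confirm the targets become healthy (the health check must get a 200 from port 80)
aws elbv2 describe-target-health --target-group-arn <target-group-arn>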