I have a Kubernetes master and minions running on EC2 instances and was able to successfully deploy an example app with the commands below:
kubectl run hello-world --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment hello-world --type=NodePort
The service is now available externally on port 30013:
NAME          CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
hello-world   10.43.95.16   <nodes>       8080:30013/TCP   1h
I'm now trying to access this by visiting the private IP of the EC2 instance running this Kubernetes minion node on port 30013, but I am not able to connect at all.
I've checked the AWS security group attached to the EC2 instance and this port is open. I cannot think of anything else that would stop access to the application.
Are there any known issues with AWS networking and Kubernetes-exposed services?
It should work (and it works on my cluster on AWS). Are you sure you are using the IP address of the eth0 interface and not cbr0 or something else? EC2 instances have just one interface and the public address is mapped to it, so from inside the EC2 instance there is not much difference.
Also, you should be able to contact 10.43.95.16 on port 8080 or just use the DNS name. If you want to connect to other services from a k8s app, you should use that (a node crash will not affect the communication, etc.).
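For example, to confirm which address the node actually has and to test the service path from inside the cluster, something like the following should work (a quick sketch using the hello-world service from the question and assuming it lives in the default namespace):
# on the minion: the eth0 address is the one to use together with the NodePort
ip addr show eth0
curl http://<minion-eth0-private-ip>:30013
# from inside the cluster: test the ClusterIP and the DNS name
kubectl run -it --rm debug --image=busybox --restart=Never -- wget -qO- http://10.43.95.16:8080
kubectl run -it --rm debug --image=busybox --restart=Never -- wget -qO- http://hello-world.default.svc.cluster.local:8080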
I have an architecture as shown in the diagram below (sorry for the mess with all the ports):
So, the EC2 instance is running and I am able to access it via SSH (port 22). I am also able to access the contents of the container running on the EC2 instance if I forward the ports via SSH. But I am not able to access the same content if I try to connect via the public IP of the EC2 instance.
As you can see, the security group is created and the ports are allowed.
When I run sudo firewall-cmd --list-all on the EC2 instance, I can see that the ports 80/tcp, 8080/tcp, 8071/tcp and 8063/tcp are allowed.
I am pretty new to AWS/Docker and cannot figure out how to access the container via the public IP.
I have tried updating the security groups and also allowing the ports on the EC2 instance, thinking that the firewall might be blocking the communication, but access was still not possible.
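One thing worth checking (a diagnostic sketch, not a confirmed fix for this setup) is whether the container ports are actually published on all host interfaces rather than bound to 127.0.0.1 only, which would explain why an SSH tunnel works while the public IP does not:
# show how the container ports are mapped on the host
docker ps --format '{{.Names}}  {{.Ports}}'
# a binding like 0.0.0.0:8080->8080/tcp is reachable from outside,
# while 127.0.0.1:8080->8080/tcp is only reachable through an SSH tunnel
sudo ss -tlnp | grep 8080
# when starting the container, publish the port on all interfaces explicitly
docker run -d -p 0.0.0.0:8080:8080 <your-image>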
I am connecting two Docker containers, a Python app container and a Redis container, with each other using ECS. When I SSH into the EC2 instance and curl localhost:5000, it returns the expected output, which means the containers are running and the Python app is connected to Redis. I am trying to achieve this result with ECS on EC2.
But when I browse to the public IP of the EC2 instance, it does not show anything. My task configuration settings:
In the Python container I have these settings: pulling the image from ECR, mapping port 5000:5000, and in the link giving the name of the Redis container so that it can connect to it. What setting am I missing so that I can reach the Python app container without SSHing into the EC2 instance and running curl localhost:5000?
If the application is accessible on localhost, then possible reasons are:
1) The instance security group does not allow port 5000, so check the security group and allow HTTP traffic on port 5000 (see the AWS CLI sketch after this list).
2) The instance might be in a private subnet, but how do you SSH in: from a bastion or directly? If directly, then the issue should be resolved by following step 1.
3) Use the public IP that you used for SSH and it should work:
http://ec2Public_Ip:5000
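A minimal sketch of step 1 with the AWS CLI (the security group ID below is a placeholder for the group attached to your container instance):
# allow inbound TCP traffic on port 5000 from anywhere
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 5000 \
    --cidr 0.0.0.0/0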
I'm using a managed AWS EKS Kubernetes cluster. For the worker nodes I have set up a node group within the EKS cluster with 2 worker nodes.
These worker nodes get a public IP assigned automatically by EKS:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-0-129.eu-central-1.compute.internal Ready <none> 6d v1.14.7-eks-1861c5 10.0.0.129 1.2.3.4 Amazon Linux 2 4.14.146-119.123.amzn2.x86_64 docker://18.6.1
ip-10-0-1-218.eu-central-1.compute.internal Ready <none> 6d v1.14.7-eks-1861c5 10.0.1.218 5.6.7.8 Amazon Linux 2 4.14.146-119.123.amzn2.x86_64 docker://18.6.1
For this example let's assume that the values assigned automatically by AWS are 1.2.3.4 and 5.6.7.8.
When running a command from inside a pod running on the first node I can also see that this is the IP address with which external requests are being made:
$ curl 'https://api.ipify.org'
1.2.3.4
The issue that I'm facing now is that I would like to control this IP address. Let's assume that from within the pod I use a service that I'm not in control of and that requires whitelisting by IP address.
I haven't found any way to specify a range of IP addresses for the node group (or for the subnets set up for the VPC in which the cluster is located) from which AWS will pick an IP address.
Is there any other way to configure the worker nodes to use fixed IP addresses?
Currently it is not possible to associate Elastic IPs with instances running as part of an EKS node group. However, here is a much better alternative that should be used instead of your setup, which is essentially all public.
Firstly, run your worker nodes or node groups inside private subnets. This will give you the ability to route out to the internet through a static IP.
The way to get a static IP that you can whitelist on the desired service is to use a NAT Gateway (see the AWS setup instructions). The NAT Gateway is associated with an Elastic IP, which won't change.
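A rough sketch of that setup with the AWS CLI (the subnet and route table IDs are placeholders; the allocated Elastic IP is the address you whitelist):
# allocate an Elastic IP and create the NAT Gateway in a public subnet
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-0public123 --allocation-id eipalloc-0abc123
# route the private subnets' outbound traffic through the NAT Gateway
aws ec2 create-route --route-table-id rtb-0private123 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0abc123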
Since you are running EKS, don't forget to modify the aws-vpc-cni configuration with AWS_VPC_K8S_CNI_EXTERNALSNAT = true. This is essential for pods to work correctly and route out to the internet. If set to true, the SNAT iptables rule and the off-VPC IP rule are not applied, and these rules are removed if they have already been applied. For this, your nodes must be running in a private subnet and connected to the internet through an AWS NAT Gateway or another external NAT device. More info in the aws-vpc-cni documentation.
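Setting that flag comes down to one kubectl command against the aws-node DaemonSet (a sketch; check the aws-vpc-cni documentation for your CNI version):
kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_EXTERNALSNAT=true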
We have created an instance template with the Ubuntu operating system. Using the instance template, we have created an instance group with 3 machines.
These 3 machines are behind a TCP load balancer with port 8080 enabled.
We have run the Python command below on the first VM:
python -m SimpleHTTPServer 8000
We see that one instance health check (1/3) is successful and have tested it with the telnet command. Since SimpleHTTPServer is running on only one instance, it shows (1/3) instances as healthy.
telnet <Loadbalancer IP> 8000
However, when we run the above command from the 2nd VM in the same instance group, we see 'Connection refused'.
telnet XX.XX.XX.XX 8000
Trying XX.XX.XX.XX...
telnet: Unable to connect to remote host: Connection refused.
Also, the same service is accessible from VMs running in other instance groups. The service is not accessible from within the same instance group.
We have verified the firewall rules and have tested with both the 'Allow all' and 'Specified protocols and ports' options.
The above use case works fine with an AWS Classic Load Balancer; however, it fails on GCP.
I have created a firewall rule, 'cluster-firewall-rule', with 'master-cluster-ports' as the target tag. This tag has been added to the network tags of the instance. The rule allows traffic on port 8080.
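For reference, a firewall rule like the one described could be created and inspected with gcloud roughly as follows (a sketch using the names from the question; adjust the ports and source range to your setup):
gcloud compute firewall-rules create cluster-firewall-rule \
    --network default \
    --allow tcp:8080 \
    --source-ranges 0.0.0.0/0 \
    --target-tags master-cluster-ports
# verify what the rule actually allows and which tags it targets
gcloud compute firewall-rules describe cluster-firewall-rule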
What is the equivalent of AWS Classic Load Balancer in GCP?
GCP does not have an equivalent of the AWS Classic Load Balancer (CLB).
AWS CLB was the first load balancer service from AWS and was built for EC2-Classic, with subsequent support for VPC. The AWS NLB and ALB services are the modern load balancers. If you can, I suggest using one of them. See https://aws.amazon.com/elasticloadbalancing/features/#compare for a comparison between them.
If you switch, then you could use GCP's corresponding load balancer services. See https://cloud.google.com/docs/compare/aws/networking.
For my benefit:
1) Are you migrating applications from AWS to GCP?
2) What is your use case for migrating applications from AWS to GCP?
I have a Kubernetes cluster with a master and two minions.
I have a service running that uses the public IP of one of the minions as the external IP of the service.
I have a deployment which runs a pod providing the service. Using the Docker IP of the pod, I am able to access the service.
But I am not able to access it using the external IP and the cluster IP.
The security groups have the necessary ports open.
Can someone help with what I am missing here? The same setup works fine in my local VM cluster.
The easiest way to access the service is to use a NodePort; then, assuming your security groups allow that port, you can access the service via the node's public IP and the assigned NodePort.
Alternatively, a better approach that does not expose your nodes to the public internet is to set the cloud provider to AWS and create a service of type LoadBalancer; the service will then be provisioned with a public ELB.
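A minimal sketch of both options, reusing the hello-world deployment from the first question (the names are illustrative):
# NodePort: reachable on <node-public-ip>:<assigned-nodeport> if the security group allows it
kubectl expose deployment hello-world --type=NodePort --port=8080
# LoadBalancer: on a cluster configured with the AWS cloud provider this provisions a public ELB
kubectl expose deployment hello-world --type=LoadBalancer --port=8080
kubectl get svc hello-world   # the ELB hostname appears under EXTERNAL-IP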