Helm install of Istio ingress gateway in EKS cluster opens two node ports to the world

When I install the Istio ingress gateway with Helm,
helm install -f helm/values.yaml istio-ingressgateway istio/gateway -n default --wait
I see several inbound rules opened in the security group associated with the nodes in the cluster. Two of these rules open traffic to the world:
Custom TCP   TCP   30111   0.0.0.0/0
Custom TCP   TCP   31760   0.0.0.0/0
Does anyone know what services run on these ports and whether they need to be opened to the world?
I didn't expect to have ports on the compute nodes opened to public access.
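One way to see what is actually listening there is to look up which Service owns those NodePorts; a quick check, assuming the gateway chart named its Service after the release in the default namespace as in the command above:
kubectl get svc -A -o wide | grep -E '30111|31760'
kubectl get svc istio-ingressgateway -n default -o jsonpath='{.spec.ports}'
The first command finds any Service that allocated those NodePorts; the second prints the port mapping of the gateway Service itself.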

Related

AWS NLB Ingress for SMTP service in Kubernetes (EKS)

I'm trying to deploy an SMTP service into Kubernetes (EKS), and I'm having trouble with ingress. I'd rather not have to deploy SMTP at all, but I don't have that option at the moment. Our Kubernetes cluster is using the ingress-nginx controller, and the docs point to a way to expose TCP connections. I have TCP exposed on the controller via a ConfigMap like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-tcp
  namespace: ingress-nginx
data:
  '25': some-namespace/smtp:25
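For reference, with the ingress-nginx helm chart the same mapping can usually be supplied through the chart's tcp value, which also adds port 25 to the controller Service that the load balancer fronts. A sketch, assuming that chart layout (key names can differ between chart versions):
tcp:
  "25": "some-namespace/smtp:25"   # also exposed on the controller Service / load balancer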
The receiving service is listening on port 25. I can verify that the k8s part is working: I've used port forwarding to forward it locally and verified with telnet that it's working. I can also access the SMTP service with telnet from a host in the VPC. I just cannot access it through the NLB. I've tried 2 different setups:
the ingress-nginx controller's NLB.
provisioning a separate NLB that points to the endpoint IP of the service. The target groups are healthy, and I can access the service from a host in the same VPC that's not in the cluster.
I've verified at least a few dozen times that the security groups are open to all traffic on port 25.
Does anyone have any insights on how to expose the service through the NLB?

create docker swarm on AWS

I created 3 EC2 instances and now I want to create a Docker swarm.
The EC2 instances have a security group with the following inbound rules open:
TCP 2377 0.0.0.0/0
TCP 7946 0.0.0.0/0
TCP 8501 0.0.0.0/0
UDP 4789 0.0.0.0/0
UDP 7946 0.0.0.0/0
SSH 7946 0.0.0.0/0
HTTPS 443 0.0.0.0/0
HTTP 80 0.0.0.0/0
after creating the resources I run the following command
docker swarm init --advertise-addr ManagerIP
and then on the other instances I paste the join command
I then create a network and 2 services
docker network create --driver overlay mydrupal
docker service create --name psql --network mydrupal -e POSTGRES_PASSWORD=postgres postgres:11
docker service create --name drupal --network mydrupal -p 80:80 drupal:8
All of this is currently running either on host1 (the swarm leader) alone, or on host1 and host2 (one of the workers).
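For reference, the task placement can be checked per service with:
docker service ps psql
docker service ps drupal
(the NODE column shows where each task is running).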
If I then go to the browser, paste the public IP of the EC2 instance, and try to configure the Postgres database during the initial setup, I either get an error, or I can get in but then cannot connect from the other instances...
I am not sure if this is a security group issue or something else.
update
If I remove the workers from the swarm I can configure the database and run Drupal.
If I then add the workers back to the swarm I can't connect using their public IPs.
update2
I opened all traffic on all ports in the security group and still nothing...

docker-machine shows that my ec2 docker nodes are timing out

I am experiencing this issue with my Docker nodes. I have a Docker swarm cluster with three nodes, and I am able to SSH to each individual node. However,
when I run docker-machine ls, it shows the state of my amazonec2-driver nodes as "Timeout", even though I am able to SSH.
In the Docker Swarm documentation it states the following ports should be opened:
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
A timeout in AWS normally indicates that a security group is not allowing inbound access on the port(s) being accessed.
Ensure you add these ports with a source of either the subnet range(s) of the other host(s) or by referencing a security group that is attached to the host(s).
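As a rough sketch with the AWS CLI (the security group ID below is a placeholder; using the group as its own source keeps the swarm ports closed to the internet):
SG=sg-0123456789abcdef0   # placeholder: the security group attached to the swarm nodes
aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol tcp --port 2377 --source-group "$SG"
aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol tcp --port 7946 --source-group "$SG"
aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol udp --port 7946 --source-group "$SG"
aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol udp --port 4789 --source-group "$SG"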

Good way to whitelist ingress traffic for JHub on EKS (AWS kubernetes)?

Context:
I have an EKS cluster (EKS is AWS' managed Kubernetes service).
I deploy an application to this EKS cluster (JupyterHub) via helm.
I have a VPN server.
Users of my application (JupyterHub on EKS) must connect to the VPN server first before they access the application.
I enforce this by removing the 0.0.0.0/0 "allow all" ingress rule on the elastic load balancer, and adding an ingress rule that allows traffic from the VPN server only.
The elastic load balancer referenced above is created implicitly by the JupyterHub application that gets deployed to EKS via helm.
Problem:
When I deploy changes to the running JupyterHub application in EKS, sometimes [depending on the changes] the ELB gets deleted and re-created.
This causes the security group associated with the ELB to also get re-created, along with the ingress rules.
This is not ideal because it is easy to overlook this when deploying changes to JupyterHub/EKS, and a developer might forget to verify the security group rules are still present.
Question:
Is there a more robust place I can enforce this ingress network rule (only allow traffic from VPN server) ?
Two thoughts I had, but are not ideal:
Use a NACL. This won't really work, because it adds a lot of overhead managing CIDRs, given that NACLs are stateless and operate at the subnet level.
I thought to add my ingress rules to the security group associated with the EKS worker nodes instead, but this won't work due to a similar problem. When you deploy an update to JupyterHub/EKS, and the ELB gets replaced, an "allow all traffic" ingress rule is implicitly added to the EKS worker node security group (allowing all traffic from the ELB). This would override my ingress rule.
It sounds like you're using a LoadBalanced service for JupyterHub. A better way of handling ingress into your cluster would be to use a single ingress controller (like the nginx ingress controller) - deployed via a different helm chart.
Then, deploy JupyterHub's helm chart, but pass a custom value into the release with the --set parameter to tell it to use a ClusterIP service instead of the LoadBalancer type. This way, changes to your JupyterHub release that might re-create the ClusterIP service won't matter, as ingress for JupyterHub is now managed through Ingress rules on the Ingress Controller instead.
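With the Zero to JupyterHub chart that usually boils down to something like the following (proxy.service.type is my reading of that chart's values, so double-check it against the chart you're using; the release and namespace names are placeholders):
helm upgrade --install jupyterhub jupyterhub/jupyterhub -n jhub \
  --set proxy.service.type=ClusterIP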
Use the ingress rule feature of the JupyterHub helm chart to configure ingress rules for your nginx ingress controller: see docs here
The LoadBalancer generated by the Nginx Ingress Controller will instead remain persistent/stable and you can define your Security Group ingress rules on that separately.
Effectively you're decoupling ingress into EKS apps from your JupyterHub app by using the Ingress Controller + ingress rules pattern of access.
On the subject of ingress and LoadBalancers
With EKS/Helm and load balanced services the default is to create an internet facing elastic load balancer.
There are some extra annotations you can add to the service definition that will instead create it as an internal facing LoadBalancer.
This might be preferable to you for your ingress controller (or anywhere else you want to use LoadBalancer services), as it doesn't immediately expose the app to the open internet. You mentioned you already have VPN access into your VPC network, so users can still VPN in, and then hit the LoadBalancer hostname.
I wrote up a guide a while back on installing the nginx ingress controller here. It talks about doing this with DigitalOcean Kubernetes, but it's still relevant for EKS as it's just a helm chart.
There is another post I did which talks about some extra configuration annotations you can add to your ingress controller service that automatically create the specific port-range ingress security group rules at the same time as the load balancer. (This is another option for you if you find that each time the load balancer gets re-created you have to manually update the ingress rules on the security group.) See the post on customising Ingress Controller load balancer and port ranges for ingress here
The config values you want for auto-configuring your LoadBalancer ingress source ranges and setting it to internal can be set with:
controller.service.loadBalancerSourceRanges
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
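Put together in the ingress controller chart's values, that looks roughly like this (the VPN CIDR below is a placeholder):
controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
    loadBalancerSourceRanges:
      - 10.8.0.0/24   # placeholder: the CIDR your VPN server/clients egress from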
Hope that helps!

Connecting Kubernetes cluster to Redis on the same GCP network

I'm running a Kubernetes cluster and HA redis VMs on the same VPC on Google Cloud Platform. ICMP and traffic on all TCP and UDP ports is allowed on the subnet 10.128.0.0/20. Kubernetes has its own internal network, 10.12.0.0/14, but the cluster runs on VMs inside of 10.128.0.0/20, same as redis VM.
However, even though the VMs inside of 10.128.0.0/20 see each other, I can't ping that same VM or connect to its ports when running commands from a Kubernetes pod. What would I need to modify, either in k8s or in the GCP firewall rules, to allow this? I was under the impression that this should work out of the box and that pods would be able to access the same network their nodes were running on.
kube-dns is up and running, and this is k8s 1.9.4 on GCP.
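A typical way to run that test from inside the cluster is a throwaway busybox pod (the Redis VM IP below is a placeholder):
kubectl run -it --rm netcheck --image=busybox --restart=Never -- sh
# then, inside the pod:
ping 10.128.0.5          # placeholder: the Redis VM's internal IP
telnet 10.128.0.5 6379   # placeholder: the Redis port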
I've tried to reproduce your issue with the same configuration, but it works fine. I've created a network called "myservernetwork1" with subnet 10.128.0.0/20. I started a cluster in this subnet and created 3 firewall rules to allow icmp, tcp and udp traffic inside the network.
$ gcloud compute firewall-rules list --filter="myservernetwork1"
myservernetwork1-icmp myservernetwork1 INGRESS 1000 icmp
myservernetwork1-tcp myservernetwork1 INGRESS 1000 tcp
myservernetwork1-udp myservernetwork1 INGRESS 1000 udp
I allowed all TCP, UDP and ICMP traffic inside the network.
I created a rule for the icmp protocol for my subnet using this command:
gcloud compute firewall-rules create myservernetwork1-icmp \
--allow icmp \
--network myservernetwork1 \
--source-ranges 10.0.0.0/8
I've used a /8 mask because I wanted to cover all addresses in my network. Check your GCP firewall settings to make sure those are correct.
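For completeness, the tcp and udp rules listed above can be created the same way, presumably along these lines (mirroring the icmp rule's source range):
gcloud compute firewall-rules create myservernetwork1-tcp \
    --allow tcp \
    --network myservernetwork1 \
    --source-ranges 10.0.0.0/8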