Connecting Kubernetes cluster to Redis on the same GCP network - google-cloud-platform

I'm running a Kubernetes cluster and HA Redis VMs on the same VPC on Google Cloud Platform. ICMP and traffic on all TCP and UDP ports are allowed on the subnet 10.128.0.0/20. Kubernetes has its own internal network, 10.12.0.0/14, but the cluster nodes are VMs inside 10.128.0.0/20, the same subnet as the Redis VMs.
However, even though the VMs inside 10.128.0.0/20 can see each other, I can't ping that same VM or connect to its ports when running commands from a Kubernetes pod. What would I need to modify, either in k8s or in the GCP firewall rules, to allow this? I was under the impression that this should work out of the box and that pods would be able to reach the same network their nodes run on.
kube-dns is up and running, and this is k8s 1.9.4 on GCP.
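For reference, this is roughly how I test from inside the cluster (a sketch; 10.128.0.5 and port 6379 stand in for the Redis VM's actual internal IP and port):
kubectl run redis-test --rm -it --image=busybox --restart=Never -- sh
# then, inside the pod:
ping -c 3 10.128.0.5      # placeholder for the Redis VM's internal IP
nc 10.128.0.5 6379        # should open a connection if pod traffic is allowed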

I've tried to reproduce your issue with the same configuration, but it works fine. I created a network called "myservernetwork1" with subnet 10.128.0.0/20, started a cluster in this subnet, and created 3 firewall rules to allow ICMP, TCP and UDP traffic inside the network.
$ gcloud compute firewall-rules list --filter="myservernetwork1"
NAME                   NETWORK           DIRECTION  PRIORITY  ALLOW
myservernetwork1-icmp  myservernetwork1  INGRESS    1000      icmp
myservernetwork1-tcp   myservernetwork1  INGRESS    1000      tcp
myservernetwork1-udp   myservernetwork1  INGRESS    1000      udp
I allowed all TCP, UDP and ICMP traffic inside the network.
I created a rule for the ICMP protocol for my subnet using this command:
gcloud compute firewall-rules create myservernetwork1-icmp \
--allow icmp \
--network myservernetwork1 \
--source-ranges 10.0.0.0/8
I've used the /8 mask because I wanted to cover all addresses in my network, including the Kubernetes pod range 10.12.0.0/14: traffic leaving a pod keeps the pod IP as its source, so a firewall rule that only allows the node subnet 10.128.0.0/20 will drop it. Check your GCP firewall settings to make sure those are correct.
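A narrower alternative in this setup would be a rule that lists the node subnet and the cluster's pod range explicitly as sources (a sketch; the rule name is made up, and 10.12.0.0/14 is the pod range from the question):
gcloud compute firewall-rules create myservernetwork1-from-pods \
  --network myservernetwork1 \
  --allow icmp,tcp,udp \
  --source-ranges 10.128.0.0/20,10.12.0.0/14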

Related

Helm install of Istio ingress gateway in EKS cluster opens two node ports to the world

When I install the Istio ingress gateway with Helm,
helm install -f helm/values.yaml istio-ingressgateway istio/gateway -n default --wait
I see several inbound rules opened in the security group associated with the nodes in the cluster. Two of these rules open traffic to the world:
Custom TCP    TCP    30111    0.0.0.0/0
Custom TCP    TCP    31760    0.0.0.0/0
Does anyone know what services run on these ports and whether they need to be opened to the world?
I didn't expect to have ports on the compute nodes opened to public access.
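For what it's worth, this is how I check which Kubernetes Service owns those node ports (a sketch; the service name istio-ingressgateway in the default namespace follows from the Helm release above):
kubectl get svc istio-ingressgateway -n default
# or search all services for the two node ports
kubectl get svc -A | grep -E '30111|31760'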

AWS: two instances: on one EC2 instance, curl the web services running via an Nginx Docker image on another EC2 instance: get Connection refused

As the title says, how do I set the Network ACLs on both EC2 instances to allow curl from one instance to the other?
In my case, the user data did not work for some reason when I created VM 2, which hosts the Nginx server, so I installed Nginx manually and started it. For the NACLs:
VM 1:
Inbound/Outbound    Custom TCP    1024-65535              IP of VM 2    Allow
VM 2:
Inbound             Custom TCP    port of Nginx service   IP of VM 1    Allow
Outbound            Custom TCP    1024-65535              IP of VM 1    Allow
I hope this helps.
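As an illustration, VM 2's NACL entries could be created with the AWS CLI roughly like this (a sketch; the ACL ID, rule numbers, VM 1's address 10.0.1.10/32 and the Nginx port 80 are all assumptions for this example):
# inbound: VM 1 -> Nginx port
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
  --ingress --rule-number 100 --protocol tcp --port-range From=80,To=80 \
  --cidr-block 10.0.1.10/32 --rule-action allow
# outbound: return traffic to VM 1's ephemeral ports
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
  --egress --rule-number 100 --protocol tcp --port-range From=1024,To=65535 \
  --cidr-block 10.0.1.10/32 --rule-action allow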

Why does the GCP load balancer frontend have only two ports enabled?

I am trying to create a load balancer in GCP. I have created two instance groups, and each instance group has a single VM attached to it. One VM serves on port 80 and the other serves on port 86.
The moment I create a load balancer, I find that the frontend IP configuration is always set to port 80.
I am looking for something like this: ip:80 and ip:86. Since I am new to GCP, I am struggling with this part.
A forwarding rule and its corresponding IP address represent the frontend configuration of a Google Cloud load balancer. With Google Cloud you can create a single forwarding rule with a single IP by adding the two ports separated by a comma.
The port limitation for the TCP proxy load balancer is due to the way TCP proxy load balancers are managed within GCP's internal infrastructure; it is not possible to use any port outside of the supported list.
For example:
Create a named port for the instance group.
gcloud compute instance-groups set-named-ports us-ig2 \
  --named-ports tcp110:110 \
  --zone us-east1-b
gcloud compute health-checks create tcp my-tcp-health-check --port 110
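The remaining pieces of that example would look roughly like this (a sketch following the standard TCP proxy load balancer setup; the resource names and the reserved address LB_STATIC_IP are placeholders):
gcloud compute backend-services create my-tcp-backend \
  --global --protocol TCP --port-name tcp110 \
  --health-checks my-tcp-health-check --timeout 5m
gcloud compute backend-services add-backend my-tcp-backend \
  --global --instance-group us-ig2 --instance-group-zone us-east1-b \
  --balancing-mode UTILIZATION --max-utilization 0.8
gcloud compute target-tcp-proxies create my-tcp-proxy \
  --backend-service my-tcp-backend --proxy-header NONE
gcloud compute forwarding-rules create my-tcp-forwarding-rule \
  --global --target-tcp-proxy my-tcp-proxy \
  --address LB_STATIC_IP --ports 110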

GKE VPN to on-premise database

My question here is probably simple, but I'm missing something.
I have a GKE cluster and I want to connect it to an on-premise database.
GKE - VPC-native cluster
GKE version - 1.17.15-gke.800
Pod IP range - 10.0.0.0/14
Services IP range - 10.4.0.0/20
I have a Cloud VPN working (policy-based connection) and I have connectivity from Google's network to the on-premise network. I've tested it from a test instance and from the GKE cluster's nodes.
The only place I don't have connectivity from is the pods.
What am I missing here?
I managed to find the right answer:
Egress traffic from GKE Pod through VPN
Got it from there: I needed to enable Network Policy for the master and the nodes, then use the ip-masq-agent config to create a ConfigMap, and finally delete the ip-masq-agent pods; when they come back up with the new config, everything works fine.
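For reference, the ConfigMap step could look roughly like this (a sketch; the ConfigMap name ip-masq-agent and the key config are what GKE's agent reads, the CIDRs are this cluster's pod and Services ranges, and the pod label selector is an assumption to verify against the actual DaemonSet):
# keep cluster-internal ranges un-masqueraded; everything else (including the
# on-premise range) is then SNAT'd to the node IP, which the VPN selectors cover
cat <<EOF > config
nonMasqueradeCIDRs:
  - 10.0.0.0/14   # pod range
  - 10.4.0.0/20   # services range
resyncInterval: 60s
EOF
kubectl create configmap ip-masq-agent --from-file=config --namespace=kube-system
# restart the agent pods so they pick up the new config (label is an assumption)
kubectl delete pods -n kube-system -l k8s-app=ip-masq-agent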

docker-machine shows that my ec2 docker nodes are timing out

I am experiencing this issue with my Docker nodes. I have a Docker swarm cluster with three nodes, and I am able to SSH into each node. However,
when I run docker-machine ls, it shows the state of my amazonec2 machines as "Timeout", even though I am able to SSH into them.
In the Docker Swarm documentation it states the following ports should be opened:
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
A timeout in AWS normally indicates that a security group is not allowing inbound access on the port(s) being accessed.
Ensure you add these ports with a source of either the subnet range(s) of the other host(s) or a security group that is attached to the host(s).
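As a rough sketch (assuming the swarm nodes share one security group; sg-0123456789abcdef0 and the workstation CIDR 203.0.113.10/32 are placeholders), the rules could be added like this. Note that docker-machine itself talks to the Docker Engine API on TCP 2376, so that port also needs to be reachable from wherever you run docker-machine ls:
SG=sg-0123456789abcdef0
# Docker Engine API used by docker-machine (from the workstation running docker-machine)
aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol tcp --port 2376 --cidr 203.0.113.10/32
# Swarm cluster management
aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol tcp --port 2377 --source-group "$SG"
# Node-to-node communication
aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol tcp --port 7946 --source-group "$SG"
aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol udp --port 7946 --source-group "$SG"
# Overlay network traffic (VXLAN)
aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol udp --port 4789 --source-group "$SG"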