My question here may be simple, but I'm missing something.
I have a GKE cluster and I want to connect it to an on-premise database.
GKE - VPC-native cluster
GKE version - 1.17.15-gke.800
Pod IP range - 10.0.0.0/14
Services IP range - 10.4.0.0/20
I have a Cloud VPN working (policy-based connection) and I have connectivity from Google's network to the on-premise network. I've tested it from a test instance and from the instances of the GKE cluster.
The only place I don't have connectivity from is the pods.
What am I missing here?
I managed to find the right answer:
Egress traffic from GKE Pod through VPN
Got it from here: I needed to enable Network Policy for the master and the nodes, then use the ip-masq-agent config to create a ConfigMap. After that, delete the ip-masq-agent pods; when they come back up with the new config, everything works fine.
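For reference, a minimal sketch of what that could look like, assuming the pod and services ranges from the question (adjust the CIDRs to your own). Because the on-premise range is deliberately not listed in nonMasqueradeCIDRs, pod traffic towards it gets SNATed to the node IP, which the policy-based VPN selector already covers:

    # write the agent config (the ConfigMap key must be named "config")
    cat <<'EOF' > config
    nonMasqueradeCIDRs:
      - 10.0.0.0/14    # pod range
      - 10.4.0.0/20    # services range
    masqLinkLocal: false
    resyncInterval: 60s
    EOF

    kubectl -n kube-system create configmap ip-masq-agent --from-file=config

    # restart the agent pods so they pick up the new config
    kubectl -n kube-system delete pods -l k8s-app=ip-masq-agent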
I've created:
1 GKE autopilot cluster
1 classic VPN Tunnel to on-premise network
Network Connectivity Tests suggest that my VPN is working (I suppose).
Result of Connectivity Test
When trying to traceroute from a pod, however:
traceroute from pod
Pod IP is 10.4.0.85
I have a host project (network) which also contains the VPN tunnel and routing. This network is shared with another project, which contains the GKE Autopilot cluster.
The VPN tunnel shows as working from both ends. GKE nodes are pingable from the on-premise network.
Since it is an Autopilot cluster, I cannot confirm whether the connection from the nodes to on-premise is working.
I expected my traceroute to show a successful connection to the on-premise IP, or at least to the VPN endpoint of the on-premise network.
It only shows one hop, to 10.4.0.65 (this is in the CIDR of my GKE cluster, but I do not know what it belongs to).
I've taken a look at IP masquerading as described here, without success.
And now I am lost. I suppose my packets (from traceroute) are not even leaving the GKE cluster, but I cannot tell why.
I'd be grateful for a hint in the right direction.
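For what it's worth, these are the kinds of checks I can run from my side to narrow it down; the busybox image and the on-premise address 192.0.2.10 below are placeholders, and on Autopilot the ip-masq-agent ConfigMap may simply not exist, which would itself be a useful data point:

    # which destination CIDRs are exempt from SNAT, if an ip-masq-agent config is present?
    kubectl -n kube-system get configmap ip-masq-agent -o yaml

    # repeat the traceroute from a throwaway pod inside the cluster
    kubectl run net-test --rm -it --image=busybox --restart=Never -- \
        traceroute -n 192.0.2.10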
In this question, the author says that the GKE cluster is not available from other subnets in the VPC.
BUT that is exactly what I need to do. I've added details below; all suggestions welcome.
I created a VPC in Google Cloud with custom subnets. I have a subnet in us-east1 and another in us-east4. Then, I created a VPC-native private GKE cluster in the same VPC in the us-east4 subnet.
[added details]
GKE in us-east4
  endpoint: 10.181.15.2
  control plane: 10.181.15.0/28
  pod address range: 10.16.0.0/16
  service address range: 10.17.0.0/22
VPC subnet in us-east4: 10.181.11.0/24
VPC subnet in us-east1: 10.171.1.0/24
I added 10.171.1.0/24 as a Control Plane authorized network, and I added 10.171.1.0/24 to the automatically created firewall rule.
But I still can't use kubectl from the instance in the 10.171.1.0/24 subnet.
What I see when trying to use kubectl from a VM in us-east4 (10.181.11.7):
On this VM, I set the context with kubectl config use-context <correct gke context> and I have gcloud configured correctly. Then
kubectl get pods correctly gives a list of pods in the GKE cluster.
From a VM in the us-east1 10.171.1.0/24 subnet, which is set up in the same way, kubectl get pods times out with an error that it's unable to reach the endpoint. The message is:

    kubectl get pods
    Unable to connect to the server: dial tcp 10.181.15.2:443: i/o timeout
This seems like a firewall problem, but I've been unable to find a solution, despite the abundance of GKE documentation out there. It could be a routing problem, but I thought a VPC-native GKE cluster would take care of the routes automatically?
By default, the private endpoint for the control plane is accessible from clients in the same region as the cluster. If you want clients in the same VPC but located in different regions to access the control plane, you'll need to enable global access using the --enable-master-global-access option. You can do this when you create the cluster or you can update an existing cluster.
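For an existing regional cluster that would be roughly the following (the cluster name is a placeholder; the region is taken from the question):

    gcloud container clusters update my-cluster \
        --region us-east4 \
        --enable-master-global-access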
Assume I have a custom VPC with the IP range 10.148.0.0/20.
This custom VPC has an allow-internal firewall rule so the services inside that IP range can communicate with each other.
As the system grows, I need to connect to an on-premises network using Classic Cloud VPN. The Cloud VPN is already created (the on-premises side was configured by someone else) and the VPN tunnel is already established (with green checkmarks).
I can also ping an on-premises IP right now (say 10.xxx.xxx.xxx, which is not a GCP internal/private IP but an on-premises private IP) from a Compute Engine instance created in the custom VPC network.
The problem is that all the Compute Engine instances spawned in the custom VPC network can no longer communicate with the internet (e.g. sudo apt update fails) or even with Google Cloud Storage (using gsutil), but they can still communicate over private IPs.
I also can't spawn a Dataproc cluster on that custom VPC (I guess because it can't connect to GCS, since Dataproc needs GCS for staging buckets).
Since I don't know much about networking and am relatively new to GCP, how can the instances I created inside the custom VPC connect to the internet?
After checking my custom VPC and Cloud VPN more closely, I realized there was a misconfiguration when I established the Cloud VPN: I had chosen route-based as the routing option and entered 0.0.0.0/0 in Remote network IP ranges. I guess this routed all traffic to the VPN, as @John Hanley said.
Solved it by choosing policy-based as the routing option and only adding the specific IP ranges in Remote network IP ranges.
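For reference, a rough gcloud sketch of a policy-based Classic VPN tunnel plus the matching static route; the names, region, peer address 203.0.113.10 and on-premises range 10.200.0.0/24 are placeholders, while 10.148.0.0/20 is the custom VPC range from the question:

    gcloud compute vpn-tunnels create on-prem-tunnel \
        --region us-east1 \
        --target-vpn-gateway my-vpn-gateway \
        --peer-address 203.0.113.10 \
        --shared-secret "my-shared-secret" \
        --ike-version 2 \
        --local-traffic-selector 10.148.0.0/20 \
        --remote-traffic-selector 10.200.0.0/24

    # route only the on-premises range through the tunnel, not 0.0.0.0/0
    gcloud compute routes create route-to-on-prem \
        --network my-custom-vpc \
        --destination-range 10.200.0.0/24 \
        --next-hop-vpn-tunnel on-prem-tunnel \
        --next-hop-vpn-tunnel-region us-east1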
Thank you @John Hanley and @guillaume blaquiere for pointing this out.
I am currently running a Kubernetes cluster on GCP. The cluster has several pods, and I created a new VM in the same network. From a Kubernetes pod I can ping the VM, but I cannot connect to it via the VM's internal IP. Please help me find a solution for this issue. Thanks.
I found a solution for this issue: create a firewall rule on GCP for the VM that allows traffic whose source range covers the pod IPs, e.g. 10.0.0.0/8.
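A hedged sketch of such a rule with gcloud; the rule name, network, and ports are placeholders, and the source range can be narrowed to the cluster's actual pod CIDR:

    gcloud compute firewall-rules create allow-pods-to-vm \
        --network default \
        --direction INGRESS \
        --action ALLOW \
        --rules tcp:80,tcp:443 \
        --source-ranges 10.0.0.0/8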
Could someone give a step-by-step procedure for connecting to ElastiCache?
I'm trying to connect to a Redis ElastiCache node from inside my EC2 instance (SSHed in). I'm getting Connection Timed Out errors each time, and I can't figure out what's wrong with how I've configured my AWS settings.
They are in different VPCs, but in my ElastiCache VPC I have a custom TCP inbound rule on port 6379 that accepts connections from anywhere, and the two VPCs share an active peering connection that I set up. What more am I supposed to do?
EDIT:
I am trying to connect via the redis-cli command. I SSHed in because I was originally trying to connect via the node-redis module, since my EC2 instance hosts a Node server. So officially my two attempts are 1. a scripted module and 2. the redis-cli command provided in the AWS documentation.
As far as I can tell, I have also set up the route tables correctly according to this: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html#route-tables-vpc-peering
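Roughly, the command I'm running looks like this (the endpoint below is a placeholder for my node's primary endpoint from the console, not the real one):

    redis-cli -h my-node.abc123.0001.use1.cache.amazonaws.com -p 6379 ping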
You cannot connect to ElastiCache from outside its VPC. It's a weird design decision on AWS's part, and although it's not documented well, it is documented here:
Amazon ElastiCache Nodes, deployed within a VPC, can never be accessed from the Internet or from EC2 Instances outside the VPC.
You can set your security groups to allow connections from everywhere, and it will look like it worked, but it won't matter or let you actually connect from outside the VPC (also a weird design decision).
In your Redis cluster properties you have a reference to the Security Group. Copy it.
On your EC2 instance you also have a Security Group. You should edit this Security Group and add the ID of the Redis Security Group (in place of a CIDR) to the outbound rules, together with port 6379.
This way the two Security Groups are linked and the connection can be established.
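A hedged AWS CLI sketch of that outbound rule; both group IDs are placeholders (sg-11111111 for the EC2 instance's group, sg-22222222 for the Redis group):

    aws ec2 authorize-security-group-egress \
        --group-id sg-11111111 \
        --ip-permissions 'IpProtocol=tcp,FromPort=6379,ToPort=6379,UserIdGroupPairs=[{GroupId=sg-22222222}]'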
Two things we might forget when trying to connect to ElastiCache:
Configuring an inbound TCP rule to allow incoming requests on port 6379
Adding the EC2 security group to the ElastiCache instance
The second one helped me; see the sketch below.
Reference for (2): https://www.youtube.com/watch?v=fxjsxtcgDoc&ab_channel=HendyIrawanSocialEnterprise
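For point 2, a hedged AWS CLI sketch; the group IDs are placeholders (sg-22222222 is the security group attached to the ElastiCache cluster, sg-11111111 the EC2 instance's group):

    aws ec2 authorize-security-group-ingress \
        --group-id sg-22222222 \
        --protocol tcp \
        --port 6379 \
        --source-group sg-11111111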
Here are step-by-step instructions for connecting to a Redis ElastiCache cluster's node from an EC2 instance located in the same VPC as ElastiCache:
Connect to a Elasticache Redis Cluster's Node
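In case the link goes stale, the gist of those steps is roughly the following (the endpoint is a placeholder; take the real one from the ElastiCache console):

    # build redis-cli on the EC2 instance
    sudo yum install -y gcc make
    wget http://download.redis.io/redis-stable.tar.gz
    tar xzf redis-stable.tar.gz
    cd redis-stable && make

    # connect using the node's endpoint and port
    src/redis-cli -h my-node.abc123.0001.use1.cache.amazonaws.com -p 6379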