I have two machines in the same VPC (in the same subnet range) in GCP. I want to ping the MAC address of one instance from the other (i.e., a layer 2 connection). Is this supported in GCP?
If not, is GRE tunnel supported between the two VMs in the above configuration or any other tunneling?
My main goal is to establish a layer 2 connection.
Andromeda (Google's network) is a software-defined networking (SDN) stack. Andromeda's goal is to expose the raw performance of the underlying network while simultaneously exposing network function virtualization.
Hence, Andromeda itself is not a Cloud Platform networking product; rather, it is the basis for delivering Cloud Platform networking services with high performance, availability, isolation, and security. For example, Cloud Platform firewalls, routing, and forwarding rules all leverage the underlying internal Andromeda APIs and infrastructure.
Also, by default, the instances are configured with a 255.255.255.255 mask (to prevent instance ARP table exhaustion), and when a new connection is initiated, the packet is sent to the subnet's gateway MAC address, regardless of whether the destination IP is outside or within the subnet range. Thus, the instance might need to make an ARP request to resolve the gateway's MAC address first.
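You can see this from inside an instance. For example (a quick check; the interface name ens4 is an assumption and varies by image and machine type):
ip addr show ens4     # the address is configured with a /32 mask
ip neigh show dev ens4     # typically only the subnet gateway appears here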
Unfortunately, Google doesn't allow GRE traffic [1].
So, my recommendation is to run some tests like iperf or MTR between the instances to validate connectivity.
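For example, a quick iperf run between the two instances (assuming iperf is installed on both; replace 10.128.0.3 with the other VM's internal IP):
iperf -s     # on the receiving instance
iperf -c 10.128.0.3     # on the sending instance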
You cannot have L2 connectivity like this out of the box. However, you can set up a VXLAN or another kind of tunnel between VMs if you really need L2 connectivity for some odd reason. I've written a blog about how to do this: https://samos-it.com/posts/gce-vm-vxlan-l2-connectivity.html (copy-pasting the main pieces below)
Create the VMs
In this section you will create two Ubuntu 20.04 VMs.
Let's start by creating vm-1:
gcloud compute instances create vm-1 \
--image-family=ubuntu-2004-lts --image-project=ubuntu-os-cloud \
--zone=us-central1-a \
--boot-disk-size 20G \
--boot-disk-type pd-ssd \
--can-ip-forward \
--network default \
--machine-type n1-standard-2
Repeat the same command creating vm-2 this time:
gcloud compute instances create vm-2 \
--image-family=ubuntu-2004-lts --image-project=ubuntu-os-cloud \
--zone=us-central1-a \
--boot-disk-size 20G \
--boot-disk-type pd-ssd \
--can-ip-forward \
--network default \
--machine-type n1-standard-2
Verify that SSH to both VMs is available and up. You might need to be patient.
gcloud compute ssh root@vm-1 --zone us-central1-a --command "echo 'SSH to vm-1 succeeded'"
gcloud compute ssh root@vm-2 --zone us-central1-a --command "echo 'SSH to vm-2 succeeded'"
Setup VXLAN mesh between the VMs
In this section, you will be creating the VXLAN mesh between vm-1 and vm-2 that you just created.
Create bash variables that will be used for setting up the VXLAN mesh
VM1_VPC_IP=$(gcloud compute instances describe vm-1 --zone us-central1-a \
--format='get(networkInterfaces[0].networkIP)')
VM2_VPC_IP=$(gcloud compute instances describe vm-2 --zone us-central1-a \
--format='get(networkInterfaces[0].networkIP)')
echo $VM1_VPC_IP
echo $VM2_VPC_IP
Create the VXLAN device and mesh on vm-1
gcloud compute ssh root@vm-1 --zone us-central1-a << EOF
set -x
ip link add vxlan0 type vxlan id 42 dev ens4 dstport 0
bridge fdb append to 00:00:00:00:00:00 dst $VM2_VPC_IP dev vxlan0
ip addr add 10.200.0.2/24 dev vxlan0
ip link set up dev vxlan0
EOF
Create the VXLAN device and mesh on vm-2
gcloud compute ssh root@vm-2 --zone us-central1-a << EOF
set -x
ip link add vxlan0 type vxlan id 42 dev ens4 dstport 0
bridge fdb append to 00:00:00:00:00:00 dst $VM1_VPC_IP dev vxlan0
ip addr add 10.200.0.3/24 dev vxlan0
ip link set up dev vxlan0
EOF
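Optionally, verify the tunnel device and its forwarding entries on each VM (a quick sanity check; output will vary):
ip -d link show vxlan0
bridge fdb show dev vxlan0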
Start a tcpdump on vm-1
gcloud compute ssh root@vm-1 --zone us-central1-a
tcpdump -i vxlan0 -n
In another session, ping vm-2 from vm-1 and take a look at the tcpdump output. Notice the ARP traffic.
gcloud compute ssh root@vm-1 --zone us-central1-a
ping 10.200.0.3
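If the ping succeeds, the neighbor table on vxlan0 should now hold vm-2's MAC address, confirming layer 2 adjacency over the tunnel:
ip neigh show dev vxlan0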
Related
I configured a Compute Engine instance with only an internal IP (10.X.X.10). I am able to SSH into it via gcloud with IAP tunneling, and to access and copy files to storage via Private Google Access; the VPC was set up with no conflicting IP ranges:
gcloud compute ssh --zone "us-central1-c" "vm_name" --tunnel-through-iap --project "projectXXX"
Now I want to open Jupyter notebook without creating an external IP in the VM.
Identity-Aware Proxy (IAP) is working well, and so is Private Google Access. After that I enabled a NAT gateway, which generated an external IP (35.X.X.155).
I configured Jupyter by running jupyter notebook --generate-config and set up a password (stored as a "sha1:..." hash).
Now I run Jupyter by typing this on gcloud SSH:
python /usr/local/bin/jupyter-notebook --ip=0.0.0.0 --port=8080 --no-browser &
Replacing: http://instance-XXX/?token=abcd
By: http://35.X.X.155/?token=abcd
But the external IP is not accessible, not even in the browser, neither over HTTP nor HTTPS. Note that I'm not considering using Network Load Balancing, because it's not necessary.
Ping 35.X.X.155 works perfectly
I also tried jupyter notebook --gateway-url=http://NAT-gateway:8888
without success
Look at this as an alternative to a bastion (a VM with an external IP).
Any ideas on how to solve this issue?
UPDATE: Looks like I have to find a way to SSH into the NAT Gateway.
What you are trying to do can be accomplished using IAP for TCP forwarding, and there is no need to use NAT at all in this scenario. Here are the steps to follow:
Ensure you have ports 22 and 8080 allowed in the project's firewall:
gcloud compute firewall-rules list
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
allow-8080-ingress-from-iap default INGRESS 1000 tcp:8080 False
allow-ssh-ingress-from-iap default INGRESS 1000 tcp:22 False
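If the rules are not there yet, something like the following should create them (35.235.240.0/20 is the source range IAP uses for TCP forwarding):
gcloud compute firewall-rules create allow-ssh-ingress-from-iap \
--direction=INGRESS --action=ALLOW --rules=tcp:22 \
--source-ranges=35.235.240.0/20
gcloud compute firewall-rules create allow-8080-ingress-from-iap \
--direction=INGRESS --action=ALLOW --rules=tcp:8080 \
--source-ranges=35.235.240.0/20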
On your endpoint's gcloud CLI, log in to GCP and set the project to where the instance is running:
gcloud config set project $GCP_PROJECT_NAME
Check if you already have SSH keys generated in your system:
ls -1 ~/.ssh/*
#=>
/. . ./id_rsa
/. . ./id_rsa.pub
If you don't have any, you can generate them with the command: ssh-keygen -t rsa -f ~/.ssh/id_rsa -C id_rsa
Add the SSH keys to your project's metadata:
gcloud compute project-info add-metadata \
--metadata ssh-keys="$(gcloud compute project-info describe \
--format="value(commonInstanceMetadata.items.filter(key:ssh-keys).firstof(value))")
$(whoami):$(cat ~/.ssh/id_rsa.pub)"
#=>
Updated [https://www.googleapis.com/compute/v1/projects/$GCP_PROJECT_NAME].
Assign the iap.tunnelResourceAccessor role to the user:
gcloud projects add-iam-policy-binding $GCP_PROJECT_NAME \
--member=user:$USER_ID \
--role=roles/iap.tunnelResourceAccessor
Start an IAP tunnel pointing to your instance:port and bind it to your desired localhost port (in this case, 9000):
gcloud compute start-iap-tunnel $INSTANCE_NAME 8080 \
--local-host-port=localhost:9000
Testing if tunnel connection works.
Listening on port [9000].
At this point, you should be able to access your Jupyter Notebook in http://127.0.0.1:9000?token=abcd.
Note: The start-iap-tunnel command is not a one-time command; it must be issued and kept running every time you want to access your Jupyter Notebook implementation.
I have created a private cluster in GKE with the following:
gcloud container clusters create private-cluster-0 \
--create-subnetwork name=my-subnet-0 \
--enable-master-authorized-networks \
--enable-ip-alias \
--enable-private-nodes \
--enable-private-endpoint \
--master-ipv4-cidr 172.16.0.32/28 \
--zone us-central1-a
Then I did
gcloud container clusters get-credentials --zone us-central1-a private-cluster-0
I was trying to install a helm chart from my local machine but I got the following error:
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "https://172.16.0.34/version?timeout=32s": dial tcp 172.16.0.34:443: i/o timeout
Can anyone please tell me how to resolve this error?
How do I deploy a helm chart from a local machine to a private cluster in GKE?
You created a private cluster and are trying to install a helm chart from your local machine.
This won't work because the 172.16.0.0/12 range is private (RFC 1918) and not routable from outside; your PC is looking for the cluster on its own LAN.
You can find information on accessing private GKE clusters in the Google docs.
There are also more general tutorials on installing helm on GKE from Google and on Medium.
First, you need basic connectivity to access your private cluster.
For example, SSH to a VM in a subnet that master-ipv4-cidr allows.
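A minimal sketch of that approach (helm-bastion and the machine type are placeholders; my-subnet-0 is the subnet auto-created for the cluster above):
gcloud compute instances create helm-bastion \
--zone us-central1-a \
--subnet my-subnet-0 \
--machine-type e2-small
gcloud compute ssh helm-bastion --zone us-central1-a --tunnel-through-iap
Then install kubectl and helm on the bastion and run the install from there.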
I had this basic connectivity but was still unable to install a helm chart as the install couldn't access services within the cluster.
I could only see this issue after adding verbosity to helm install and logging the output.
helm install -v10 my-chart >log.txt 2>&1
With the get-credentials command
gcloud container clusters get-credentials --zone us-central1-a private-cluster-0
Try adding the argument --internal-ip
This controls whether to use the internal IP address of the cluster endpoint. It made the difference for me.
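In full, that would be:
gcloud container clusters get-credentials private-cluster-0 \
--zone us-central1-a \
--internal-ip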
I have a GKE cluster that I created with the following command:
$ gcloud container clusters create stage1 \
--enable-ip-alias \
--release-channel stable \
--region us-central1 \
--node-locations us-central1-a,us-central1-b
and I also created a redis instance with the following command:
$ gcloud redis instances create redisbox --size=2 --region=us-central1 --redis-version=redis_5_0
I have retrieved the IP address of the redis instance with:
$ gcloud redis instances describe redisbox --region=us-central1
I have updated this IP in my PHP application, built my Docker image, and created the pod in the GKE cluster. When the pod is created, the container throws the following error:
Connection to Redis :6379 failed after 2 failures.Last Error : (110) Operation timed out
Note 1: This is a working application in a hosted environment that we are migrating to Google Cloud.
Note 2: The GKE cluster and the Redis instance are in the same region.
Note 3: IP aliasing is enabled in the cluster.
After reproducing this VPC-native GKE cluster and Redis instance with your gcloud commands, I could check that both the nodes and their pods can reach the redisbox host, for example with ncat in a debian:latest pod:
$ REDIS_IP=$(gcloud redis instances describe redisbox --format='get(host)' --region=us-central1)
$ gcloud container clusters get-credentials stage1 --region=us-central1
$ kubectl exec -ti my-debian-pod -- /bin/bash -c "ncat $REDIS_IP 6379 <<<PING"
+PONG
Therefore, I suggest that you try performing this lower-level reachability test in case there is an issue with the specific request that your PHP application is doing.
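It may also be worth confirming that the Redis instance and the cluster are attached to the same VPC network, since Memorystore is only reachable from its authorized network. A quick check (both commands should point at the same network):
gcloud redis instances describe redisbox --region=us-central1 --format='get(authorizedNetwork)'
gcloud container clusters describe stage1 --region us-central1 --format='get(network)'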
I have written a small script which creates a VPC, a firewall rule, and an instance. I pass parameters to the script, but instead of taking the parameter meant for the firewall name, it puts a value derived from the instance name in the firewall name field.
INSTANCE_NAME=$1
ZONE=$2
MACHINE_TYPE=$3
IMAGE_FAMILY=$4
IMAGE_PROJECT=$5
BOOT_DISK_SIZE=$6
BOOT_DISK_TYPE=$7
NETWORK_NAME=$8
FIREWALL_RULE=$9
FIREWALL_NAME=$10
TAGS=$11
gcloud compute networks create $NETWORK_NAME --subnet-mode=auto
gcloud compute firewall-rules create $FIREWALL_NAME --network=$NETWORK_NAME --allow=$FIREWALL_RULE --source-tags=$TAGS
gcloud compute instances create $INSTANCE_NAME \
--zone=$ZONE \
--machine-type=$MACHINE_TYPE \
--image-family=$IMAGE_FAMILY \
--image-project=$IMAGE_PROJECT \
--boot-disk-size=$BOOT_DISK_SIZE \
--boot-disk-type=$BOOT_DISK_TYPE \
--network-interface network=$NETWORK_NAME,no-address \
--tags=$TAGS
Command: bash network.sh myvm us-west1-a f1-micro ubuntu-1810 ubuntu-os-cloud 10 pd-ssd mynetwork tcp:80 myrule mytag
Output:
Created .
NAME SUBNET_MODE BGP_ROUTING_MODE IPV4_RANGE GATEWAY_IPV4
mynetwork AUTO REGIONAL
Instances on this network will not be reachable until firewall rules
are created. As an example, you can allow all internal traffic between
instances as well as SSH, RDP, and ICMP by running:
$ gcloud compute firewall-rules create <FIREWALL_NAME> --network mynetwork --allow tcp,udp,icmp --source-ranges <IP_RANGE>
$ gcloud compute firewall-rules create <FIREWALL_NAME> --network mynetwork --allow tcp:22,tcp:3389,icmp
Creating firewall...⠛Created
Creating firewall...done.
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
myvm0 mynetwork INGRESS 1000 tcp:80 False
Created.
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
myvm us-west1-a f1-micro 10.138.0.2 RUNNING
Please check the name of the firewall created (below 'Creating firewall...done.'). It's not what I provided in the command; it's similar to the INSTANCE_NAME variable.
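This looks like the classic positional-parameter pitfall: in bash, $10 is parsed as ${1}0, i.e. the value of $1 followed by a literal 0, which is exactly why the firewall came out as myvm0. Parameters beyond $9 need braces:
FIREWALL_NAME=${10}   # $10 expands as ${1}0 -> "myvm0"
TAGS=${11}            # $11 expands as ${1}1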
I have been following this guide to deploy Pega 7.4 on Google Cloud Compute Engine. Everything went smoothly; however, the load balancer health check keeps reporting the service as unhealthy.
When visiting the external IP, a 502 is returned, and while troubleshooting, GCP told us to "Make sure that your backend is healthy and supports HTTP/2 protocol". Well, in the guide, this command:
gcloud compute backend-services create pega-app \
--health-checks=pega-health \
--port-name=pega-web \
--session-affinity=GENERATED_COOKIE \
--protocol=HTTP --global
The protocol is HTTP, but is this the same as HTTP/2?
What else could be wrong, besides checking that the firewall setup allows the health checker and load balancer to pass through (below)?
gcloud compute firewall-rules create pega-internal \
--description="Pega node to node communication requirements" \
--action=ALLOW \
--rules=tcp:9300-9399,tcp:5701-5800 \
--source-tags=pega-app \
--target-tags=pega-app
gcloud compute firewall-rules create pega-web-external \
--description="Pega external web ports" \
--action=ALLOW \
--rules=tcp:8080,tcp:8443 \
--source-ranges=130.211.0.0/22,35.191.0.0/16 \
--target-tags=pega-app
Edit:
So the instance group has a named port on 8080:
gcloud compute instance-groups managed set-named-ports pega-app \
--named-ports=pega-web:8080 \
--region=${REGION}
And the health check config:
gcloud compute health-checks create http pega-health \
--request-path=/prweb/PRRestService/monitor/pingservice/ping \
--port=8080
I have checked the VM instance logs on the pega-app and I'm getting a 404 when trying to hit the ping service.
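One way to narrow that down is to request the health check path directly from one of the instances, using the same port and path as the health check above:
curl -v http://localhost:8080/prweb/PRRestService/monitor/pingservice/ping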
My problem was that I had not configured a static IP address (with a DNS record applied to it) like this: gcloud compute addresses create pega-app --global. I skipped this step, so an ephemeral IP address was generated each time the instances booted up.