I have written a small script which creates a VPC, a firewall rule, and an instance. I pass parameters to the script, but instead of taking the parameter meant for the firewall name, it puts a value resembling the instance name into the firewall name field.
INSTANCE_NAME=$1
ZONE=$2
MACHINE_TYPE=$3
IMAGE_FAMILY=$4
IMAGE_PROJECT=$5
BOOT_DISK_SIZE=$6
BOOT_DISK_TYPE=$7
NETWORK_NAME=$8
FIREWALL_RULE=$9
FIREWALL_NAME=$10
TAGS=$11
gcloud compute networks create $NETWORK_NAME --subnet-mode=auto
gcloud compute firewall-rules create $FIREWALL_NAME --network=$NETWORK_NAME --allow=$FIREWALL_RULE --source-tags=$TAGS
gcloud compute instances create $INSTANCE_NAME \
--zone=$ZONE \
--machine-type=$MACHINE_TYPE \
--image-family=$IMAGE_FAMILY \
--image-project=$IMAGE_PROJECT \
--boot-disk-size=$BOOT_DISK_SIZE \
--boot-disk-type=$BOOT_DISK_TYPE \
--network-interface network=$NETWORK_NAME,no-address \
--tags=$TAGS
Command: bash network.sh myvm us-west1-a f1-micro ubuntu-1810 ubuntu-os-cloud 10 pd-ssd mynetwork tcp:80 myrule mytag
Output:
Created .
NAME SUBNET_MODE BGP_ROUTING_MODE IPV4_RANGE GATEWAY_IPV4
mynetwork AUTO REGIONAL
Instances on this network will not be reachable until firewall rules
are created. As an example, you can allow all internal traffic between
instances as well as SSH, RDP, and ICMP by running:
$ gcloud compute firewall-rules create <FIREWALL_NAME> --network mynetwork --allow tcp,udp,icmp --source-ranges <IP_RANGE>
$ gcloud compute firewall-rules create <FIREWALL_NAME> --network mynetwork --allow tcp:22,tcp:3389,icmp
Creating firewall...⠛Created
Creating firewall...done.
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
myvm0 mynetwork INGRESS 1000 tcp:80 False
Created.
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
myvm us-west1-a f1-micro 10.138.0.2 RUNNING
Please check the name of the firewall created (below 'Creating firewall...done.'). It's not what I provided on the command line; it looks like the INSTANCE_NAME value instead.
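For reference, bash only treats the first digit after $ as part of a positional parameter, so $10 expands to the value of $1 followed by a literal 0 (which matches the myvm0 name above). Parameters beyond $9 need braces; a minimal sketch of the corrected assignments, using the same variable names as the script:
# Positional parameters beyond $9 must be written with braces
FIREWALL_NAME=${10}
TAGS=${11}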
I deployed an EKS cluster with two nodes in the same subnet.
kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-31-xx-xx.xx-xx-xx.compute.internal Ready <none> 6h31m v1.22.9-eks-xxxx
ip-172-31-xx-xx.xx-xxx-x.compute.internal Ready <none> 6h31m v1.22.9-eks-xxxx
Everything worked fine. I wanted to configure a NAT gateway for the subnet in which the nodes are present.
Once the NAT gateway was configured, all of a sudden all the nodes went into the NotReady state.
kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-31-xx-xx.xx-xx-xx.compute.internal NotReady <none> 6h45m v1.22.9-eks-xxxx
ip-172-31-xx-xx.xx-xxx-x.compute.internal NotReady <none> 6h45m v1.22.9-eks-xxxx
kubectl get events also shows that the nodes are NotReady. I am not able to exec into a pod either;
when I try kubectl exec I get error: unable to upgrade connection: Unauthorized.
Upon removing my subnet from the associated subnets in the route table (a step done as part of creating the NAT gateway), everything worked fine again and the nodes went back into the Ready state.
Any idea how to create a NAT gateway for EKS worker nodes? Is there anything I am missing?
Thanks in advance.
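For reference, the console steps described above correspond roughly to the following AWS CLI calls; every resource ID below is a placeholder, not a value from this cluster:
# Create the NAT gateway in a public subnet, using a pre-allocated Elastic IP
aws ec2 create-nat-gateway --subnet-id subnet-PUBLIC --allocation-id eipalloc-EXAMPLE
# Point the default route of a route table at the NAT gateway
aws ec2 create-route --route-table-id rtb-EXAMPLE \
    --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-EXAMPLE
# Associating the node subnet with that route table is the step that preceded NotReady;
# removing the association restored the nodes
aws ec2 associate-route-table --route-table-id rtb-EXAMPLE --subnet-id subnet-NODES
aws ec2 disassociate-route-table --association-id rtbassoc-EXAMPLE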
I used eksctl to deploy the cluster with the following command:
eksctl create cluster \
--name test-cluster \
--version 1.22 \
--nodegroup-name test-kube-workers \
--node-type t3.medium \
--nodes 2 \
--nodes-min 1 \
--nodes-max 2 \
--node-private-networking \
--ssh-access
and everything has been taken care of.
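A quick, hedged way to confirm what eksctl set up (the VPC ID below is a placeholder for the cluster VPC):
# List the NAT gateway(s) in the cluster VPC
aws ec2 describe-nat-gateways --filter Name=vpc-id,Values=vpc-EXAMPLE
# Inspect the route tables: the private (node) subnets should have a 0.0.0.0/0 route
# pointing at a nat-... ID, while the public subnets point at an igw-... ID
aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-EXAMPLE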
I have created a private cluster in GKE with the following:
gcloud container clusters create private-cluster-0 \
--create-subnetwork name=my-subnet-0 \
--enable-master-authorized-networks \
--enable-ip-alias \
--enable-private-nodes \
--enable-private-endpoint \
--master-ipv4-cidr 172.16.0.32/28 \
--zone us-central1-a
Then I did
gcloud container clusters get-credentials --zone us-central1-a private-cluster-0
I was trying to install a helm chart from my local machine but I got the following error:
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "https://172.16.0.34/version?timeout=32s": dial tcp 172.16.0.34:443: i/o timeout
Can anyone please tell me how to resolve this error?
How do I deploy a helm chart from a local machine to a private cluster in GKE?
You created a private cluster and are trying to install helm from your local machine.
This won't work because the 172.16.0.0/12 range is not publicly routable; your PC is looking for the cluster endpoint on your own LAN.
You can find information on accessing private GKE clusters in the Google docs.
There are also more general tutorials on installing helm on GKE from Google and on Medium.
First you need basic connectivity to access your private cluster.
For example, SSH to a VM on a subnet that master-ipv4-cidr allows.
I had this basic connectivity but was still unable to install a helm chart as the install couldn't access services within the cluster.
I could only see this issue after adding verbosity to helm install and logging the output.
helm install -v10 my-chart >log.txt 2>&1
With the get-credentials command
gcloud container clusters get-credentials --zone us-central1-a private-cluster-0
Try adding the argument --internal-ip.
This controls whether the internal IP address of the cluster endpoint is used; it made the difference for me.
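For example, reusing the cluster name and zone from above, a minimal sketch would be:
gcloud container clusters get-credentials private-cluster-0 \
    --zone us-central1-a \
    --internal-ip
This still has to be run from somewhere that can reach the private endpoint, such as the VM mentioned earlier.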
I have two machines in the same VPC (under the same subnet range) in GCP. I want to reach the MAC address of one instance from the other (i.e. a layer 2 connection). Is this supported in GCP?
If not, is a GRE tunnel supported between the two VMs in the above configuration, or any other kind of tunneling?
My main goal is to establish a layer 2 connection.
Andromeda (Google's Network) is a Software Defined Networking (SDN). Andromeda's goal is to expose the raw performance of the underlying network while simultaneously exposing network function virtualization.
Hence, Andromeda itself is not a Cloud Platform networking product; rather, it is the basis for delivering Cloud Platform networking services with high performance, availability, isolation, and security. For example, Cloud Platform firewalls, routing, and forwarding rules all leverage the underlying internal Andromeda APIs and infrastructure.
Also, by default, the instances are configured with a 255.255.255.255 mask (to prevent instance ARP table exhaustion), and when a new connection is initiated, the packet will be sent to the subnet's gateway MAC address, regardless of whether the destination IP is outside or within the subnet range. Thus, the instance might need to make an ARP request to resolve the gateway's MAC address first.
Unfortunately, Google doesn't allow GRE traffic [1].
So my recommendation is to run some tests like iperf or MTR between them in order to validate connectivity.
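A hedged sketch of such a test between the two VMs (package names assume a Debian/Ubuntu image, and 10.128.0.3 stands in for the other instance's internal IP):
# On both instances
sudo apt-get update && sudo apt-get install -y iperf3 mtr-tiny
# On instance A: run an iperf3 server
iperf3 -s
# On instance B: measure throughput and the path to instance A's internal IP
iperf3 -c 10.128.0.3
mtr --report 10.128.0.3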
You cannot have L2 connectivity like this out of the box. However, you can set up VXLAN or other kinds of tunnels between VMs if you really need L2 connectivity for some odd reason. I've written a blog about how to do this: https://samos-it.com/posts/gce-vm-vxlan-l2-connectivity.html (copy-pasting the main pieces below)
Create the VMs
In this section you will create 2 Ubuntu 20.04 VMs
Let's start by creating vm-1
gcloud compute instances create vm-1 \
--image-family=ubuntu-2004-lts --image-project=ubuntu-os-cloud \
--zone=us-central1-a \
--boot-disk-size 20G \
--boot-disk-type pd-ssd \
--can-ip-forward \
--network default \
--machine-type n1-standard-2
Repeat the same command creating vm-2 this time:
gcloud compute instances create vm-2 \
--image-family=ubuntu-2004-lts --image-project=ubuntu-os-cloud \
--zone=us-central1-a \
--boot-disk-size 20G \
--boot-disk-type pd-ssd \
--can-ip-forward \
--network default \
--machine-type n1-standard-2
Verify that SSH to both VMs is available and up. You might need to be patient.
gcloud compute ssh root@vm-1 --zone us-central1-a --command "echo 'SSH to vm-1 succeeded'"
gcloud compute ssh root@vm-2 --zone us-central1-a --command "echo 'SSH to vm-2 succeeded'"
Setup VXLAN mesh between the VMs
In this section, you will be creating the VXLAN mesh between vm-1 and vm-2 that you just created.
Create bash variables that will be used for setting up the VXLAN mesh
VM1_VPC_IP=$(gcloud compute instances describe vm-1 \
--format='get(networkInterfaces[0].networkIP)')
VM2_VPC_IP=$(gcloud compute instances describe vm-2 \
--format='get(networkInterfaces[0].networkIP)')
echo $VM1_VPC_IP
echo $VM2_VPC_IP
Create the VXLAN device and mesh on vm-1
gcloud compute ssh root@vm-1 --zone us-central1-a << EOF
set -x
ip link add vxlan0 type vxlan id 42 dev ens4 dstport 0
bridge fdb append to 00:00:00:00:00:00 dst $VM2_VPC_IP dev vxlan0
ip addr add 10.200.0.2/24 dev vxlan0
ip link set up dev vxlan0
EOF
Create the VXLAN device and mesh on vm-2
gcloud compute ssh root@vm-2 --zone us-central1-a << EOF
set -x
ip link add vxlan0 type vxlan id 42 dev ens4 dstport 0
bridge fdb append to 00:00:00:00:00:00 dst $VM1_VPC_IP dev vxlan0
ip addr add 10.200.0.3/24 dev vxlan0
ip link set up dev vxlan0
EOF
Start a tcpdump on vm-1
gcloud compute ssh root@vm-1 --zone us-central1-a
tcpdump -i vxlan0 -n
In another session, ping vm-2 from vm-1 and take a look at the tcpdump output. Notice the ARP.
gcloud compute ssh root@vm-1 --zone us-central1-a
ping 10.200.0.3
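To double-check that this behaves like an L2 segment, you can look at the neighbour (ARP) table of the VXLAN interface after the ping; the entry shown should be vm-2's vxlan0 MAC address, learned across the tunnel:
# Still on vm-1, after the ping above
ip neigh show dev vxlan0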
I have been following this guide to deploy Pega 7.4 on Google Cloud Compute Engine. Everything went smoothly; however, the load balancer health check keeps reporting the service as unhealthy.
When visiting the external IP a 502 is returned, and while troubleshooting, GCP told us to "Make sure that your backend is healthy and supports HTTP/2 protocol". In the guide this command is used:
gcloud compute backend-services create pega-app \
--health-checks=pega-health \
--port-name=pega-web \
--session-affinity=GENERATED_COOKIE \
--protocol=HTTP --global
The protocol is HTTP, but is this the same as HTTP/2?
What else could be wrong, besides checking that the firewall setup allows the health checker and load balancer traffic through (below)?
gcloud compute firewall-rules create pega-internal \
--description="Pega node to node communication requirements" \
--action=ALLOW \
--rules=tcp:9300-9399,tcp:5701-5800 \
--source-tags=pega-app \
--target-tags=pega-app
gcloud compute firewall-rules create pega-web-external \
--description="Pega external web ports" \
--action=ALLOW \
--rules=tcp:8080,tcp:8443 \
--source-ranges=130.211.0.0/22,35.191.0.0/16 \
--target-tags=pega-app
Edit:
So the instance group has a named port on 8080:
gcloud compute instance-groups managed set-named-ports pega-app \
--named-ports=pega-web:8080 \
--region=${REGION}
And the health check config:
gcloud compute health-checks create http pega-health \
--request-path=/prweb/PRRestService/monitor/pingservice/ping \
--port=8080
I have checked the VM instance logs on the pega-app instances and I am getting a 404 when trying to hit the ping service.
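For reference, a quick way to reproduce the health check from one of the pega-app instances themselves (assuming Tomcat listens on 8080 as configured above):
# Should print 200 if the ping service answers on the named port
curl -s -o /dev/null -w '%{http_code}\n' \
    http://localhost:8080/prweb/PRRestService/monitor/pingservice/ping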
My problem was that I had skipped configuring a static IP address (with a domain name system record pointing at it) like this: gcloud compute addresses create pega-app --global. Because I skipped this step, ephemeral IP addresses were generated each time the instances had to boot up.
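For reference, that skipped step amounts to reserving a global static address and creating the DNS record against it, roughly:
# Reserve a global static IP for the load balancer
gcloud compute addresses create pega-app --global
# Look up the reserved address so a DNS A record can be created for it
gcloud compute addresses describe pega-app --global --format='get(address)'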
I want to configure Google Cloud Load Balancing so that:
All edge requests to port 443 terminate SSL at the load balancer and route to port 8080 in a managed instance group
All edge requests to port 80 route to port 8081 in a managed instance group which will then send a 307 response to the HTTPS service forcing SSL.
I have:
Global forwarding rules
A global forwarding rule STATIC_IP:80 -> httpsredirect-target-proxy
A global forwarding rule STATIC_IP:443 -> webapp-target-proxy
Target proxies
httpsredirect-target-proxy -> httpredirect_urlmap
webapp-target-proxy -> webapp_urlmap
URL Maps
httpredirect_urlmap -> redirect_backend (8081 in the instance pool)
webapp_urlmap -> webapp_backend (8080 in the instance pool)
This does not work.
With this setup, if I set the redirect_backend port to 8081, the webapp_backend port is also changed to 8081. Likewise, if I set the webapp_backend port to 8080, then the redirect_backend port is set to 8080.
Is it possible to route traffic based on port to different backends? The option is there in the GUI, with no validation errors, so it feels like it should be possible, but when a backend port is set, all backends end up with the same port.
I know putting HAProxy on the nodes and reverse proxying the microservices there is a solution, but I'd rather have the Google Cloud HTTP(S) load balancer terminate SSL since I'm using f1-micro instances.
The key to doing this is an easily missed snippet at https://cloud.google.com/compute/docs/load-balancing/http/backend-service#restrictions_and_guidance.
Your configuration will be simpler if you do not add the same instance group to two different backends. If you do add the same instance group to two backends:
...
If your instance group serves two or more ports for several backends respectively, you have to specify different port names in the instance group.
The initial setup is non-trivial, so below is a reference.
Based on the example in my config:
A managed instance group
Main webapp running on port 80
HTTP redirect service running on port 8081
Firewall
Ensure you have firewall rules allowing health check access to your services from Google:
gcloud compute firewall-rules create allow-http-from-lb \
--description "Incoming http allowed from cloud loadbalancer." \
--allow tcp:80 \
--source-ranges "130.211.0.0/22"
gcloud compute firewall-rules create allow-http-redirect-from-lb \
--description "Incoming http redirect service allowed from cloud loadbalancer." \
--allow tcp:8081 \
--source-ranges "130.211.0.0/22"
Healthcheck
Ensure you have health checks set up for the two services, checking the correct internal ports.
gcloud compute http-health-checks create webapp-healthcheck \
--description "Main webapp healthcheck" \
--port 80 \
--request-path "/healthcheck"
gcloud compute http-health-checks create httpsredirect-service-healthcheck \
--description "HTTP redirect service healthcheck" \
--port 8081 \
--request-path "/healthcheck"
Configure named ports
This looks to be the key if your instance group has several microservices running on different ports that you want to expose under a common load balancer.
Replace INSTANCE_GROUP_NAME, REGION and the named-ports with correct values for your services.
gcloud compute instance-groups set-named-ports INSTANCE_GROUP_NAME \
--region=REGION \
--named-ports "webapp:80,httpsredirectservice:8081"
Create load balancer backends
Ensure the --port-name matches the correct named port from the previous step.
gcloud compute backend-services create webapp-lb-backend \
--http-health-check webapp-healthcheck \
--protocol http \
--description "Webapp load balancer backend" \
--port-name webapp
gcloud compute backend-services create httpsredirect-lb-backend \
--http-health-check httpsredirect-service-healthcheck \
--protocol http \
--description "HTTP -> HTTPS redirect service load balancer backend" \
--port-name httpsredirectservice
Create URL Maps for the two services
Ensure --default-service uses the configured values from the previous step.
gcloud compute url-maps create webapp-urlmap \
--default-service webapp-lb-backend
gcloud compute url-maps create httpsredirect-urlmap \
--default-service httpsredirect-lb-backend
Create Target Proxies
Target proxies are referenced by one or more global forwarding rules and route the incoming HTTP or HTTPS requests to a URL map.
We create an HTTPS target proxy for the webapp to terminate SSL at the load balancer.
gcloud compute target-https-proxies create webapp-target-proxy \
--url-map webapp-urlmap \
--ssl-certificate [SSL_CERTIFICATES]
The redirect service:
gcloud compute target-http-proxies create httpsredirect-target-proxy \
--url-map httpsredirect-urlmap
Global forwarding rules
The final step is to create the global forwarding rules
gcloud compute forwarding-rules create webapp-forwarding-rule \
--global \
--address LB_STATIC_IP \
--port-range 443 \
--target-https-proxy webapp-target-proxy
gcloud compute forwarding-rules create httpsredirect-forwarding-rule \
--global \
--address LB_STATIC_IP \
--port-range 80 \
--target-http-proxy httpsredirect-target-proxy
Issues I hit
Ensure the firewall is configured correctly to allow health checks, and that health checks are set up.
If you get intermittent 502 errors, check that the load balancers in the Cloud Console report healthy instances (see the command below).
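For example, backend health can also be checked from the CLI (using the backend name from this answer):
# Reports the health of each instance behind the backend service
gcloud compute backend-services get-health webapp-lb-backend --global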
Other notes
Because two URL maps are needed, you are charged for two load balancers, judging by my billing info; port 80 and port 443 each use their own load balancer.
It doesn't look like it's possible to use a network load balancer both terminating SSL and serving HTTP, as can be done on AWS.