I am trying to SSH into my compute engine VM instance on Google Cloud.
I am following the instructions to set up a regional external HTTP(S) load balancer with VM instance group backends.
I have created a firewall rule to allow SSH traffic.
gcloud compute firewall-rules describe fw-allow-ssh returns:
allowed:
- IPProtocol: tcp
  ports:
  - '22'
creationTimestamp: '2022-09-13T07:55:49.187-07:00'
description: ''
direction: INGRESS
disabled: false
id: '3158638846670612250'
kind: compute#firewall
logConfig:
  enable: false
name: fw-allow-ssh
network: https://www.googleapis.com/compute/v1/projects/possible-post-360304/global/networks/default
priority: 1000
selfLink: https://www.googleapis.com/compute/v1/projects/possible-post-360304/global/firewalls/fw-allow-ssh
sourceRanges:
- 0.0.0.0/0
targetTags:
- load-balanced-backend
Apart from that, I have two more firewall rules: fw-allow-health-check and fw-allow-proxies.
gcloud compute firewall-rules describe fw-allow-health-check returns:
allowed:
- IPProtocol: tcp
  ports:
  - '80'
creationTimestamp: '2022-09-12T21:29:49.688-07:00'
description: ''
direction: INGRESS
disabled: false
id: '2007525931317311954'
kind: compute#firewall
logConfig:
  enable: false
name: fw-allow-health-check
network: https://www.googleapis.com/compute/v1/projects/possible-post-360304/global/networks/lb-network
priority: 1000
selfLink: https://www.googleapis.com/compute/v1/projects/possible-post-360304/global/firewalls/fw-allow-health-check
sourceRanges:
- 130.211.0.0/22
- 35.191.0.0/16
targetTags:
- load-balanced-backend
gcloud compute firewall-rules describe fw-allow-proxies returns:
allowed:
- IPProtocol: tcp
  ports:
  - '80'
  - '443'
  - '8080'
creationTimestamp: '2022-09-12T21:33:19.582-07:00'
description: ''
direction: INGRESS
disabled: false
id: '3828652160003716832'
kind: compute#firewall
logConfig:
  enable: false
name: fw-allow-proxies
network: https://www.googleapis.com/compute/v1/projects/possible-post-360304/global/networks/lb-network
priority: 1000
selfLink: https://www.googleapis.com/compute/v1/projects/possible-post-360304/global/firewalls/fw-allow-proxies
sourceRanges:
- 10.129.0.0/23
targetTags:
- load-balanced-backend
When I try to SSH into my VM instance from the browser, I get the following error:
Cloud IAP for TCP forwarding is not currently supported for google.com projects; attempting to use the legacy relays instead. If you are connecting to a non google.com project, continue reading. Please consider adding a firewall rule to allow ingress from the Cloud IAP for TCP forwarding netblock to the SSH port of your machine to start using Cloud IAP for TCP forwarding for better performance.
and in due course:
We are unable to connect to the VM on port 22.
What am I doing wrong here, please? Any guidance would be of great help.
Thank you!
I might not know the context and all your details, but in my personal experience:
If your firewalls are configured correctly, you should be able to make an SSH connection from some host over the 'internet', i.e. from your local machine. Identity-Aware Proxy is not required at all.
If you would like to make an SSH connection from the UI console (from the SSH 'button' in the browser), you might need to
1/ make sure that the relevant API is enabled and you are ready to pay for such access - see the Identity-Aware Proxy overview and Identity-Aware Proxy (API) in the console.
2/ make sure the firewalls are configured correctly to allow SSH access from the relevant Google IP range (i.e. 35.235.240.0/20) and that those who need such access have the relevant IAM roles - see Using IAP for TCP forwarding (an example rule is sketched below).
3/ check that the VM you would like to connect to has a 'tag' mentioned in the firewall rules (if tags are used).
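For reference, a minimal sketch of such an IAP rule, assuming the VM carries the load-balanced-backend tag shown above (the rule name is just an example). Also double-check which VPC network the VM actually lives on: the fw-allow-ssh rule above sits on the default network, while fw-allow-health-check and fw-allow-proxies are on lb-network, and a firewall rule only applies to instances on its own network, so swap --network for whichever network the VM is attached to:

gcloud compute firewall-rules create fw-allow-ssh-from-iap \
    --network=lb-network \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:22 \
    --source-ranges=35.235.240.0/20 \
    --target-tags=load-balanced-backend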
Related
I'm trying to create a load-balancer that balances traffic between three different AZs in a given region. If I create a "global" load-balancer with an external IP, everything works fine, but if I am only trying to create a load-balancer that works with a particular subnet, the health checks consistently fail because they are trying to go to port 80 instead of the port I've specified.
Note the following output of gcloud compute backend-services get-health xx-redacted-central-lb --region=us-central1:
---
backend: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-a/instanceGroups/xx-redacted-central-a
status:
  healthStatus:
  - healthState: UNHEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-a/instances/yb-1-xx-redacted-lb-test-n2
    ipAddress: 10.152.0.90
    port: 80
  kind: compute#backendServiceGroupHealth
---
backend: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-b/instanceGroups/ac-kroger-central-b
status:
  healthStatus:
  - healthState: UNHEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-b/instances/yb-1-xx-redacted-lb-test-n1
    ipAddress: 10.152.0.92
    port: 80
  kind: compute#backendServiceGroupHealth
---
backend: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-c/instanceGroups/xx-redacted-central-c
status:
  healthStatus:
  - healthState: UNHEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-c/instances/yb-1-xx-redacted-lb-test-n3
    ipAddress: 10.152.0.4
    port: 80
  kind: compute#backendServiceGroupHealth
The health-check for this load-balancer was created with the following command:
gcloud compute health-checks create tcp xx-redacted-central-hc4 --port=5433
The backend was created like this:
gcloud compute backend-services create xx-redacted-central-lb --protocol=TCP --health-checks=xx-redacted-central-hc4 --region=us-central1 --load-balancing-scheme=INTERNAL
Full description of the backend:
gcloud compute backend-services describe xx-redacted-central-lb --region=us-central1
backends:
- balancingMode: CONNECTION
  group: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-a/instanceGroups/xx-redacted-central-a
- balancingMode: CONNECTION
  group: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-b/instanceGroups/xx-redacted-central-b
- balancingMode: CONNECTION
  group: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-c/instanceGroups/xx-redacted-central-c
connectionDraining:
  drainingTimeoutSec: 0
creationTimestamp: '2020-04-01T19:16:44.405-07:00'
description: ''
fingerprint: aOB7iT47XCk=
healthChecks:
- https://www.googleapis.com/compute/v1/projects/yugabyte/global/healthChecks/xx-redacted-central-hc4
id: '1151478560954316259'
kind: compute#backendService
loadBalancingScheme: INTERNAL
name: xx-redacted-central-lb
protocol: TCP
region: https://www.googleapis.com/compute/v1/projects/yugabyte/regions/us-central1
selfLink: https://www.googleapis.com/compute/v1/projects/yugabyte/regions/us-central1/backendServices/xx-redacted-central-lb
sessionAffinity: NONE
timeoutSec: 30
If I try to edit the backend and add a port or port-name annotation, it fails to save because it thinks it is an invalid operation with INTERNAL load-balancers.
Any ideas?
--Alan
As per the GCP documentation[1], for health checks to work you must create ingress allow firewall rules so that traffic from the Google Cloud probers can connect to your backends (see the example rule after the references below).
You can review this documentation[2] to understand the success criteria for SSL and TCP health checks.
[1]Probe IP ranges and firewall rules
https://cloud.google.com/load-balancing/docs/health-check-concepts#ip-ranges
[2]Success Criteria
https://cloud.google.com/load-balancing/docs/health-check-concepts#criteria-protocol-ssl-tcp
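For example, a minimal sketch of such a rule for the setup above, assuming the backends listen on 5433 as in your health check (NETWORK_NAME is a placeholder for your VPC network, and a target tag on the instances would narrow the rule further):

gcloud compute firewall-rules create fw-allow-health-check-probes \
    --network=NETWORK_NAME \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:5433 \
    --source-ranges=35.191.0.0/16,130.211.0.0/22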
Backend services must have an associated named port if their backends are instance groups. Named ports are used by load balancing services to direct traffic to specific ports on individual instances. You can assign a port name mapping to the instance group to inform the load balancer to use that port to reach the backend running the service, as sketched below.
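A minimal sketch of assigning such a mapping to one of the instance groups from the question (the port name ysql is purely illustrative; 5433 matches the health-check port you configured), repeated for each zone's group:

gcloud compute instance-groups set-named-ports xx-redacted-central-a \
    --zone=us-central1-a \
    --named-ports=ysql:5433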
Thanks for providing the information. I can successfully reproduce this issue at my end and find it strange that the backend health checks still point to port 80 whereas the LB health check is configured for a port other than 80. The product engineering team has been made aware of this issue; however, I don't have any ETA on the fix and implementation. You may follow thread[1] for further updates.
[1]https://issuetracker.google.com/153600927
I just set up a NodeJS based site on Google Cloud using the Cloud Run service.
There are two DNS records: A (IPv4) and AAAA (IPv6). Whenever I access the site using Chrome, Chrome picks the IPv6 address and the NodeJS app fails hard:
TypeError [ERR_INVALID_URL]: Invalid URL: http://2001:14ba:98ae:1700:****:****:****:****/
at onParseError (internal/url.js:257:9)
at new URL (internal/url.js:333:5)
Note: I censored the address
If I force my browser to use the IPv4 address, then the site works fine.
Is there a way to make the Cloud Run service use IPv4 to the container/app? I don't mind IPv6 at the client <-> Cloud Run level.
My Cloud Run YAML looks like:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: xxx-dev-app-825af7f
  namespace: 'xxx'
  selfLink: /apis/serving.knative.dev/v1/namespaces/xxx/services/xxx-dev-app-825af7f
  uid: 2d787ef2-39a7-xxx-yyy-zzz
  resourceVersion: AAWfuzEBUYA
  generation: 5
  creationTimestamp: '2020-02-26T18:58:40.504717Z'
  labels:
    cloud.googleapis.com/location: europe-north1
  annotations:
    run.googleapis.com/client-name: gcloud
    serving.knative.dev/creator: pulumi#xxx.iam.gserviceaccount.com
    serving.knative.dev/lastModifier: xxx#cloudbuild.gserviceaccount.com
    client.knative.dev/user-image: gcr.io/xxx/app:4860b1e137457b0e42a1896d7b95e0348d8cd7e4
    run.googleapis.com/client-version: 279.0.0
spec:
  traffic:
  - percent: 100
    latestRevision: true
  template:
    metadata:
      name: xxx-dev-app-825af7f-00005-xoz
      annotations:
        run.googleapis.com/client-name: gcloud
        client.knative.dev/user-image: gcr.io/xxx/app:4860b1e137457b0e42a1896d7b95e0348d8cd7e4
        run.googleapis.com/client-version: 279.0.0
        autoscaling.knative.dev/maxScale: '1000'
    spec:
      timeoutSeconds: 900
      containerConcurrency: 80
      containers:
      - image: gcr.io/xxx/app:4860b1e137457b0e42a1896d7b95e0348d8cd7e4
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 1000m
            memory: 256Mi
          requests:
            cpu: 200m
            memory: 64Mi
status:
  conditions:
  - type: Ready
    status: 'True'
    lastTransitionTime: '2020-02-29T18:33:33.424Z'
  - type: ConfigurationsReady
    status: 'True'
    lastTransitionTime: '2020-02-29T18:33:28.264Z'
  - type: RoutesReady
    status: 'True'
    lastTransitionTime: '2020-02-29T18:33:33.424Z'
  observedGeneration: 5
  traffic:
  - revisionName: xxx-dev-app-825af7f-00005-xoz
    percent: 100
    latestRevision: true
  latestReadyRevisionName: xxx-dev-app-825af7f-00005-xoz
  latestCreatedRevisionName: xxx-dev-app-825af7f-00005-xoz
  address:
    url: https://xxx.run.app
  url: https://xxx.run.app
AFAIK, IPv6 is only supported on the global load balancer. This load balancer proxies the connection and converts it to IPv4 for internal access into the Google network. Thereby, direct access to Cloud Run with IPv6 seems impossible.
However, things are in progress, especially around load balancing, and that could solve your issue. Maybe there will be announcements at Cloud Next in April. Stay tuned!
For the connections between Cloud Run <=> user browser: You currently cannot disable the IPv6 stack.
(As Guillaume said, the upcoming support for a configurable Cloud HTTPS Load Balancer would solve your problem; in fact, IPv4 is the default for GCLB, and you explicitly need to configure an IPv6 address if you want IPv6 for your GCLB.)
For connections between Cloud Run Service <=> Cloud Run Service: you are fully in control of which IP you connect to, on the client side.
For example, on the client side,
Force Python HTTP client to use IPv4
Force Go HTTP client to use IPv4
You can force programs to use IPv4 using their options e.g. curl --ipv4.
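For instance, a quick check against the service URL from the status block above (with https://xxx.run.app standing in for your real URL):

# Force curl to resolve and connect over IPv4 only
curl --ipv4 -sv https://xxx.run.app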
There are 4 "default" firewall rules defined.
I want to disable a particular one, default-allow-ssh, for only a specific host.
For some reason I don't see a default-allow-ssh tag in the output of gcloud compute instances describe $VM:
tags:
  fingerprint: ioTF8nBLmIk=
  items:
  - allow-tcp-443
  - allow-tcp-80
I checked the rule definition:
gcloud compute firewall-rules describe default-allow-ssh
allowed:
- IPProtocol: tcp
  ports:
  - '22'
description: Allow SSH from anywhere
direction: INGRESS
disabled: false
kind: compute#firewall
name: default-allow-ssh
network: https://www.googleapis.com/compute/v1/projects/.../global/networks/default
priority: 65534
selfLink: https://www.googleapis.com/compute/v1/projects/.../global/firewalls/default-allow-ssh
sourceRanges:
- 0.0.0.0/0
I see no targetTags or sourceTags in the definition. Does that mean that the rule is applied to the entire project and can't be disabled per host?
I see no targetTags or sourceTags in the definition. Does that mean that
the rule is applied to the entire project and can't be disabled per host?
Yes, exactly; you can find more about the default firewall rules here.
It's best practice to make this rule less permissive by using tags or source IPs; however, you could also make another rule that denies SSH traffic to those specific VMs using a tag, perhaps allowing SSH only from a bastion host. For example:
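A minimal sketch under those assumptions (the rule name deny-ssh-tagged, the tag no-ssh and YOUR_ZONE are all hypothetical; the DENY rule takes effect because its priority of 1000 is stronger than the default rule's 65534):

gcloud compute firewall-rules create deny-ssh-tagged \
    --network=default \
    --direction=INGRESS \
    --action=DENY \
    --rules=tcp:22 \
    --target-tags=no-ssh \
    --priority=1000

# Attach the tag to the VM that should not accept SSH
gcloud compute instances add-tags $VM --tags=no-ssh --zone=YOUR_ZONE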
I’m trying to configure SSL for an AWS Load Balancer for my AWS EKS cluster. The load balancer is proxying to a Traefik instance running on my cluster. This works fine over HTTP.
Then I created my AWS certificate in AWS Certificate Manager, copied the ARN and followed this part of the documentation: Services - Kubernetes
But the certificate is not linked to the listeners in the AWS Load Balancer. I can't find further documentation or a working example on the web. Can anyone point me to one?
The LoadBalancer configuration looks like this:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"traefik-ingress-service","namespace":"kube-system"},"spec":{"ports":[{"name":"web","port":80,"targetPort":80},{"name":"admin","port":8080,"targetPort":8080},{"name":"secure","port":443,"targetPort":443}],"selector":{"k8s-app":"traefik-ingress-lb"},"type":"LoadBalancer"}}
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-north-1:000000000:certificate/e386a77d-26d9-4608-826b-b2b3a5d1ec47
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
  creationTimestamp: 2019-01-14T14:33:17Z
  name: traefik-ingress-service
  namespace: kube-system
  resourceVersion: "10172130"
  selfLink: /api/v1/namespaces/kube-system/services/traefik-ingress-service
  uid: e386a77d-26d9-4608-826b-b2b3a5d1ec47
spec:
  clusterIP: 10.100.115.166
  externalTrafficPolicy: Cluster
  ports:
  - name: web
    port: 80
    protocol: TCP
    targetPort: 80
  - name: admin
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: secure
    port: 443
    protocol: TCP
    targetPort: 80
  selector:
    k8s-app: traefik-ingress-lb
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: e386a77d-26d9-4608-826b-b2b3a5d1ec47.eu-north-1.elb.amazonaws.com
Kind Regards and looking forward to your answers.
I had a similar issue since I'm using EKS v1.14 (and nginx-ingress-controller) with a Network Load Balancer; according to Kubernetes, this is possible since Kubernetes v1.15 - GitHub Issue. And since 10-March-2020, Amazon EKS supports Kubernetes version 1.15.
So if it's still relevant, read more about it here - How do I terminate HTTPS traffic on Amazon EKS workloads with ACM?.
I ran into the same problem and discovered that the issue was that the certificate type that I chose (ECDSA 384-bit) wasn't compatible with the Classic Load Balancer (but was supported by the new Application Load Balancer). When I switched to an RSA certificate it worked correctly.
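If you want to confirm which key algorithm a certificate uses before pointing the Service annotation at it, something along these lines should do it (reusing the ARN from the question; the query path is the KeyAlgorithm field of the DescribeCertificate response):

aws acm describe-certificate \
    --region eu-north-1 \
    --certificate-arn arn:aws:acm:eu-north-1:000000000:certificate/e386a77d-26d9-4608-826b-b2b3a5d1ec47 \
    --query 'Certificate.KeyAlgorithm' \
    --output text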
I have an app running on port 31280, exposed via nodePorts from a Kubernetes cluster. The same port is exposed through a named port on the instance group used by the cluster for load balancing. While creating a backend-service with the HTTP protocol, the service is created on the default HTTP port (80) even if I specify a custom named port.
Exposed named-port for instance group is:
gcloud preview instance-groups --zone='asia-east1-a' list-services gke-dropwizard-service-31ccc162-group
[
  {
    "endpoints": [
      {
        "name": "dropwizard-example-service-http",
        "port": 31280
      }
    ],
    "fingerprint": "XXXXXXXXXXXXXXXX"
  }
]
Health check is:
gcloud compute http-health-checks describe dropwizard-example-service
checkIntervalSec: 5
creationTimestamp: '2015-08-11T12:08:16.245-07:00'
description: Dropwizard Example Sevice health check ping
healthyThreshold: 2
host: ''
id: 'XXXXXXX'
kind: compute#httpHealthCheck
name: dropwizard-example-service
port: 31318
requestPath: /ping
selfLink: https://www.googleapis.com/compute/v1/projects/XXX/global/httpHealthChecks/dropwizard-example-service
timeoutSec: 3
unhealthyThreshold: 2
The health port (31318) is also exposed via a named port in the instance group.
The command used to create the backend-service is:
gcloud compute backend-services create "dropwizard-example-external-service" --description "Dropwizard Example Service via Nodeports from Kubernetes cluster" --http-health-check "dropwizard-example-service" --port-name "dropwizard-example-service-http" --timeout "30"
The command used to add the instance group to the backend-service is:
gcloud compute backend-services add-backend "dropwizard-example-external-service" --group "gke-dropwizard-service-31ccc162-group" --zone "asia-east1-a" --balancing-mode "UTILIZATION" --capacity-scaler "1" --max-utilization "0.8"
Finally, the created backend-service is described as:
gcloud compute backend-services describe dropwizard-example-external-service
backends:
- balancingMode: UTILIZATION
  capacityScaler: 1.0
  description: ''
  group: https://www.googleapis.com/resourceviews/v1beta2/projects/XXX/zones/asia-east1-a/resourceViews/gke-dropwizard-service-31ccc162-group
  maxUtilization: 0.8
creationTimestamp: '2015-08-11T13:10:46.608-07:00'
description: Dropwizard Example Service via Nodeport from Kubernetes cluster
fingerprint: XXXXXXXXXXXX
healthChecks:
- https://www.googleapis.com/compute/v1/projects/XXX/global/httpHealthChecks/dropwizard-example-service
id: 'XXXX'
kind: compute#backendService
name: dropwizard-example-external-service
port: 80
portName: dropwizard-example-service-http
protocol: HTTP
selfLink: https://www.googleapis.com/compute/v1/projects/XXXX/global/backendServices/dropwizard-example-external-service
timeoutSec: 30
I don't understand which part is wrong. Why is the backend-service using port 80?
EDIT: I was wrong. It does seem to work. I had a typo in my script.
My script is here - I literally just ran this and it worked properly.
https://gist.github.com/thockin/36fea15cc0deb08a768a
Original response for posterity:
I'm not an expert in the GCE L7 API yet, but I have made it work in Kubernetes. I think there's a bug in the --port-name logic. If you specify --port directly it seems to work for me. I'm filing an issue internally.
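For anyone hitting this with a current gcloud (the question predates the removal of gcloud preview instance-groups), a sketch of how to verify and fix the wiring, reusing the names from the question; these are assumed equivalents of the older commands rather than the exact syntax used back then:

# Confirm the named port exists on the instance group
gcloud compute instance-groups get-named-ports gke-dropwizard-service-31ccc162-group \
    --zone=asia-east1-a

# Point the (global) backend service at that named port
gcloud compute backend-services update dropwizard-example-external-service \
    --global \
    --port-name=dropwizard-example-service-http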