Route traffic between autoscale group and cloud run - google-cloud-platform

I have a backend service based on an autoscaling instance group behind HTTP(S) Load Balancing.
I'm moving this service to Cloud Run. Now I want to set up weight-based load balancing between the two, with 95% of traffic going to the instance group and 5% going to Cloud Run.
I tried multiple things with no luck, and also couldn't find any official documentation for this.
There seems to be no option to mix different types of endpoints (instance group and Cloud Run) in one HTTP load balancer. Here is the URL map I tried:
defaultService: https://www.googleapis.com/compute/v1/projects/XXXXXXXXXX/global/backendServices/backend-instancegroup-bs
fingerprint: Z2O8KiXXOIGDU=XXXXX
hostRules:
- hosts:
  - test.com
  pathMatcher: path-matcher-1
- hosts:
  - '*.test.com'
  pathMatcher: path-matcher-2
kind: compute#urlMap
name: backend-dev-external-lb2
pathMatchers:
- defaultService: https://www.googleapis.com/compute/v1/projects/XXXXXXXXXX/global/backendBuckets/backend-bucket-bs
  name: path-matcher-1
  pathRules:
  - paths:
    - /login
    - /healthcheck
    routeAction:
      weightedBackendServices:
      - backendService: https://www.googleapis.com/compute/v1/projects/XXXXXXXXXX/global/backendServices/backend-instancegroup-bs
        weight: 95
      - backendService: https://www.googleapis.com/compute/v1/projects/XXXXXXXXXX/global/backendServices/backend-cloudrun-bs
        weight: 5
- defaultService: https://www.googleapis.com/compute/v1/projects/XXXXXXXXXX/global/backendBuckets/backend-bucket-bs
  name: path-matcher-2
  pathRules:
  - paths:
    - /login
    - /healthcheck
    routeAction:
      weightedBackendServices:
      - backendService: https://www.googleapis.com/compute/v1/projects/XXXXXXXXXX/global/backendServices/backend-instancegroup-bs
        weight: 95
      - backendService: https://www.googleapis.com/compute/v1/projects/XXXXXXXXXX/global/backendServices/backend-cloudrun-bs
        weight: 5

I was able to achieve this.
The backends I had been using were created through the GCP Console, which defaults to --load-balancing-scheme=EXTERNAL, but the advanced traffic management mentioned in the Google docs requires --load-balancing-scheme=EXTERNAL_MANAGED.
After creating new backend services with --load-balancing-scheme=EXTERNAL_MANAGED, the weighted routing works.
You also have to create the gcloud compute forwarding-rules with the same load-balancing scheme.
This means you have to create all of the load balancer components manually; a rough sketch of the commands is below.
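A rough sketch of the gcloud steps, assuming placeholder names (my-cloudrun-neg, my-http-proxy, my-fr, and the Cloud Run service my-cloudrun-service) and a region of us-central1 that you would replace with your own values:

# Serverless NEG that points at the Cloud Run service
gcloud compute network-endpoint-groups create my-cloudrun-neg \
    --region=us-central1 \
    --network-endpoint-type=serverless \
    --cloud-run-service=my-cloudrun-service

# Backend services must use the EXTERNAL_MANAGED scheme for weighted routing
gcloud compute backend-services create backend-cloudrun-bs \
    --global \
    --load-balancing-scheme=EXTERNAL_MANAGED

gcloud compute backend-services add-backend backend-cloudrun-bs \
    --global \
    --network-endpoint-group=my-cloudrun-neg \
    --network-endpoint-group-region=us-central1

# The instance-group backend service needs the same scheme, and so does the forwarding rule
gcloud compute forwarding-rules create my-fr \
    --global \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --target-http-proxy=my-http-proxy \
    --ports=80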

Related

Istio: how to configure services that use the root directory to be served under secondary paths

How do I reproduce my nginx configuration in Istio? I need to be able to access pgAdmin through a secondary path rather than through the root directory, because the root directory will be used by other important services.
You would need to create Istio Gateway and VirtualService objects; please refer to the Istio documentation on traffic management. Below is a sample of URI-based routing, and you can add further routes in the same way based on your requirements.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: app-route
spec:
  hosts:
  - app.prod.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: /pgadmin
    route:
    - destination:
        host: <db service name>
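A Gateway object is also needed for traffic entering the mesh from outside; a minimal sketch, assuming a hypothetical gateway name (the VirtualService above would additionally list this gateway under spec.gateways):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: app-gateway           # hypothetical name
spec:
  selector:
    istio: ingressgateway     # bind to Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"                     # or restrict to your external hostname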

Problem creating regional TCP load balancer

I'm trying to create a load balancer that balances traffic between three different AZs in a given region. If I create a "global" load balancer with an external IP, everything works fine, but if I create a load balancer that works only with a particular subnet, the health checks consistently fail because they try to go to port 80 instead of the port I've specified.
Note the following output of gcloud compute backend-services get-health xx-redacted-central-lb --region=us-central1:
---
backend: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-a/instanceGroups/xx-redacted-central-a
status:
  healthStatus:
  - healthState: UNHEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-a/instances/yb-1-xx-redacted-lb-test-n2
    ipAddress: 10.152.0.90
    port: 80
  kind: compute#backendServiceGroupHealth
---
backend: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-b/instanceGroups/ac-kroger-central-b
status:
  healthStatus:
  - healthState: UNHEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-b/instances/yb-1-xx-redacted-lb-test-n1
    ipAddress: 10.152.0.92
    port: 80
  kind: compute#backendServiceGroupHealth
---
backend: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-c/instanceGroups/xx-redacted-central-c
status:
  healthStatus:
  - healthState: UNHEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-c/instances/yb-1-xx-redacted-lb-test-n3
    ipAddress: 10.152.0.4
    port: 80
  kind: compute#backendServiceGroupHealth
The health-check for this load-balancer was created with the following command:
gcloud compute health-checks create tcp xx-redacted-central-hc4 --port=5433
The backend was created like this:
gcloud compute backend-services create xx-redacted-central-lb --protocol=TCP --health-checks=xx-redacted-central-hc4 --region=us-central1 --load-balancing-scheme=INTERNAL
Full description of the backend:
gcloud compute backend-services describe xx-redacted-central-lb --region=us-central1
backends:
- balancingMode: CONNECTION
  group: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-a/instanceGroups/xx-redacted-central-a
- balancingMode: CONNECTION
  group: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-b/instanceGroups/xx-redacted-central-b
- balancingMode: CONNECTION
  group: https://www.googleapis.com/compute/v1/projects/yugabyte/zones/us-central1-c/instanceGroups/xx-redacted-central-c
connectionDraining:
  drainingTimeoutSec: 0
creationTimestamp: '2020-04-01T19:16:44.405-07:00'
description: ''
fingerprint: aOB7iT47XCk=
healthChecks:
- https://www.googleapis.com/compute/v1/projects/yugabyte/global/healthChecks/xx-redacted-central-hc4
id: '1151478560954316259'
kind: compute#backendService
loadBalancingScheme: INTERNAL
name: xx-redacted-central-lb
protocol: TCP
region: https://www.googleapis.com/compute/v1/projects/yugabyte/regions/us-central1
selfLink: https://www.googleapis.com/compute/v1/projects/yugabyte/regions/us-central1/backendServices/xx-redacted-central-lb
sessionAffinity: NONE
timeoutSec: 30
If I try to edit the backend and add a port or port-name annotation, it fails to save because that is considered an invalid operation for INTERNAL load balancers.
Any ideas?
--Alan
As per the GCP documentation[1], for health checks to work you must create ingress firewall rules that allow traffic from the Google Cloud prober IP ranges to reach your backends (see the example rule after the references).
You can review this documentation[2] to understand the success criteria for SSL and TCP health checks.
[1] Probe IP ranges and firewall rules: https://cloud.google.com/load-balancing/docs/health-check-concepts#ip-ranges
[2] Success criteria: https://cloud.google.com/load-balancing/docs/health-check-concepts#criteria-protocol-ssl-tcp
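For example, a firewall rule that admits the documented prober ranges on the health-check port used here might look like this (the rule name and network are placeholders):

gcloud compute firewall-rules create allow-gcp-health-checks \
    --network=default \
    --allow=tcp:5433 \
    --source-ranges=35.191.0.0/16,130.211.0.0/22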
Backend services must have an associated named port if their backends are instance groups. Named ports are used by load balancing services to direct traffic to specific ports on individual instances. You can assign a port name mapping to the instance group to tell the load balancer which port to use to reach the backend running the service, as in the command below.
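For illustration, a named port could be added to one of the instance groups from the question like this (the port name pgsql is an assumption; repeat for each zone's group):

gcloud compute instance-groups set-named-ports xx-redacted-central-a \
    --zone=us-central1-a \
    --named-ports=pgsql:5433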
Thanks for providing the information. I can reproduce this issue on my end, and I find it strange that the backend health checks still point to port 80 while the load balancer health check is configured for a different port. The product engineering team has been made aware of this issue; however, I don't have any ETA for the fix. You may follow the thread[1] for further updates.
[1]https://issuetracker.google.com/153600927

How to hide Django Admin from the public on Azure Kubernetes Service while keeping access via backdoor

I'm running a Django app on Azure Kubernetes Service and, for security purposes, would like to do the following:
Completely block off the admin portal from the public (e.g. average Joe cannot reach mysite.com/admin)
Allow access through some backdoor (e.g. a private network, jump host, etc.)
One scenario would be to run two completely separate services: 1) the main API part of the app which is just the primary codebase with the admin disabled. This is served publicly. and 2) Private site behind some firewall which has admin enabled. Each could be on a different cluster with a different FQDN but all connect to the same datastore. This is definitely overkill - there must be a way to keep everything within the cluster.
I think there might be a way to configure the Azure networking layer to block/allow traffic from specific IP ranges, and to do it on a per-endpoint basis (e.g. mysite.com/admin versus mysite.com/api/1/test). Alternatively, maybe this is doable at a per-subdomain level (e.g. api.mysite.com/anything versus admin.mysite.com/anything).
This might also be doable at the Kubernetes ingress layer but I can't figure out how.
What is the easiest way to satisfy the 2 requirements?
You can manage this restriction at the ingress level:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/whitelist-source-range: "192.168.0.XXX, 192.175.2.XXX"
  name: staging-ingress
  namespace: default
spec:
  rules:
  - host: test.example.io
    http:
      paths:
      - backend:
          serviceName: service-name
          servicePort: 80
  tls:
  - hosts:
    - test.example.io
    secretName: tls-cert
You can whitelist source IP addresses for a specific path to cover your backdoor requirement. For the public part, you can create another ingress rule without the whitelist annotation (see the sketch at the end of this answer).
For a particular path:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/whitelist-source-range: "192.168.0.XXX, 192.175.2.XXX"
  name: staging-ingress
  namespace: default
spec:
  rules:
  - host: test.example.io
    http:
      paths:
      - path: /admin
        backend:
          serviceName: service-name
          servicePort: 80
  tls:
  - hosts:
    - test.example.io
    secretName: tls-cert
test.example.io/admin will then only be accessible from the whitelisted source ranges.
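A sketch of the companion public Ingress without the whitelist annotation (the ingress name and the /api path are assumptions; TLS can be repeated as in the example above if needed):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: public-ingress        # hypothetical name
  namespace: default
spec:
  rules:
  - host: test.example.io
    http:
      paths:
      - path: /api            # publicly reachable path
        backend:
          serviceName: service-name
          servicePort: 80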

Configuring Envoy to use SRV records generated by AWS ECS and Route53

I'm using AWS ECS to deploy multiple web services (via Docker images) that are behind an Envoy front proxy. Some of these docker images have multiple deployed instances.
I'm currently using the service discovery features of ECS to generate DNS records so my services are discoverable. All of this works as expected.
I was initially using the awsvpc network mode and was using A records for service discovery. However I soon hit the network limit (started getting 'Not enough ENI' errors) so I've switched to Bridged networking and I'm trying out service discovery using SRV records.
The problem I've run into is that Envoy doesn't seem to support SRV records for service discovery. Or if it does, what changes do I need to make to my setup? I've included the relevant portion of my cluster configuration:
clusters:
- name: ms_auth
  connect_timeout: 0.25s
  type: strict_dns
  lb_policy: round_robin
  hosts:
  - socket_address:
      address: ms_auth.apis
      port_value: 80
- name: ms_logging
  connect_timeout: 0.25s
  type: strict_dns
  lb_policy: round_robin
  hosts:
  - socket_address:
      address: ms_logging.apis
      port_value: 80
Failing that, what other options should I consider in getting this setup to work?
Posting the solution I ended up going with.
I set up Consul to work as a discovery service. Basically, a Consul sidecar runs alongside every cluster/webservice I have. When the webservice comes online, it registers itself with the Consul server, so only the Consul server name needs to be known.
Once a service is registered, you can either query Consul to get the IP for the webservice, or directly access it in the form of <webservice_name>.service.consul
The only change I had to make to the Envoy config was to point at the Consul server IP for DNS resolution (see below).
clusters:
- name: ms_auth
  connect_timeout: 0.25s
  type: strict_dns
  lb_policy: round_robin
  hosts:
  - socket_address:
      address: ms-auth.service.consul
      port_value: 80
  dns_resolvers:
  - socket_address:
      address: {DNS_RESOLVER_IP}
      port_value: 8600
- name: ms_logging
  connect_timeout: 0.25s
  type: strict_dns
  lb_policy: round_robin
  hosts:
  - socket_address:
      address: ms-logging.service.consul
      port_value: 80
  dns_resolvers:
  - socket_address:
      address: {DNS_RESOLVER_IP}
      port_value: 8600
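To confirm that the resolver Envoy points at is answering, you can query the Consul DNS port directly (replace {DNS_RESOLVER_IP} with your Consul server or agent address):

# SRV record includes the registered service port
dig @{DNS_RESOLVER_IP} -p 8600 ms-auth.service.consul SRV

# A record returns the instance IPs
dig @{DNS_RESOLVER_IP} -p 8600 ms-auth.service.consul A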

GCE ingress health checks failing on kubernetes

I am trying to run a bitcoin node on kubernetes. My stateful set is as follows:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: bitcoin-stateful
  namespace: dev
spec:
  serviceName: bitcoinrpc-dev-service
  replicas: 1
  selector:
    matchLabels:
      app: bitcoin-node
  template:
    metadata:
      labels:
        app: bitcoin-node
    spec:
      containers:
      - name: bitcoin-node-mainnet
        image: myimage:v0.13.2-addrindex
        imagePullPolicy: Always
        ports:
        - containerPort: 8332
        volumeMounts:
        - name: bitcoin-chaindata
          mountPath: /root/.bitcoin
        livenessProbe:
          exec:
            command:
            - bitcoin-cli
            - getinfo
          initialDelaySeconds: 60 # wait this period after starting the first time
          periodSeconds: 15       # polling interval
          timeoutSeconds: 15      # expect a response within this time period
        readinessProbe:
          exec:
            command:
            - bitcoin-cli
            - getinfo
          initialDelaySeconds: 60 # wait this period after starting the first time
          periodSeconds: 15       # polling interval
          timeoutSeconds: 15      # expect a response within this time period
        command: ["/bin/bash"]
        args: ["-c","service ntp start && \
          bitcoind -printtoconsole -conf=/root/.bitcoin/bitcoin.conf -reindex-chainstate -datadir=/root/.bitcoin/ -daemon=0 -bind=0.0.0.0"]
Since the bitcoin node doesn't serve HTTP GET requests and can only serve POST requests, I am using the bitcoin-cli command for the liveness and readiness probes.
My service is as follows:
kind: Service
apiVersion: v1
metadata:
  name: bitcoinrpc-dev-service
  namespace: dev
spec:
  selector:
    app: bitcoin-node
  ports:
  - name: mainnet
    protocol: TCP
    port: 80
    targetPort: 8332
When I describe the pods, they are running ok and all the health checks seem to be ok.
However, I am also using ingress controller with the following config:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: dev-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "dev-ingress"
spec:
  rules:
  - host: bitcoin.something.net
    http:
      paths:
      - path: /rpc
        backend:
          serviceName: bitcoinrpc-dev-service
          servicePort: 80
The health checks on the L7 load balancer are failing. These checks are configured automatically by the ingress controller, and they are not the same as the ones configured in the readiness probe. I tried deleting and recreating the ingress, but it still behaves the same way.
I have the following questions:
1. Should I modify/delete this health check manually?
2. Even if the health check is failing (wrongly configured), since the containers and ingress are up, does it mean that I should be able to access the service through http?
What is missing is that you are running the liveness and readiness probes as exec commands; you need pods that define an exec liveness probe and an exec readiness probe, as described in the Kubernetes documentation on configuring probes.
Another point: to receive traffic through the GCE L7 load balancer controller you need at least one Kubernetes NodePort Service (this is the endpoint for your Ingress). Your Service is not configured that way, so you will not be able to access it; a sketch of the corrected Service is below.
The health check shown in the picture is for the default backend (your MIG uses it to check node health), i.e. it is the node health check, not the container's.
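A minimal sketch of the Service from the question switched to type NodePort so that the GCE Ingress controller can use it:

kind: Service
apiVersion: v1
metadata:
  name: bitcoinrpc-dev-service
  namespace: dev
spec:
  type: NodePort      # required by the GCE L7 Ingress unless container-native (NEG) load balancing is used
  selector:
    app: bitcoin-node
  ports:
  - name: mainnet
    protocol: TCP
    port: 80
    targetPort: 8332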
No, you don't have to delete the health check; it will be recreated automatically even if you delete it.
No, you won't be able to access the service until the health checks pass, because in GKE the traffic is routed through NEGs, which depend on health checks to know where they can route traffic.
One possible solution could be to add a basic HTTP handler to your application that returns 200; this can then be used as the health check endpoint.
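If you add such an endpoint, one way to point the GKE ingress health check at it is a BackendConfig attached to the Service; a sketch, assuming a hypothetical /healthz path served on port 8332:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: bitcoin-backendconfig   # hypothetical name
  namespace: dev
spec:
  healthCheck:
    type: HTTP
    requestPath: /healthz       # the 200-returning endpoint you expose
    port: 8332

The Service would then reference it through the cloud.google.com/backend-config annotation.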
Other possible options include:
Creating a Service of type NodePort and letting the load balancer route traffic on the given port to the node pool/instance groups as the backend service, rather than using NEGs.
Creating a Service of type LoadBalancer. This is the easiest option, but you need to ensure that the load balancer IP is protected using appropriate security policies such as IAP, firewall rules, etc.