Disable Istio default retry on error code 503 - istio

I am not sure whether we are talking about the same issue, so I will try to explain the scenario I am testing. I have two services, appservice-A and appservice-B; both are in the same namespace "mynamespace" and each has its own VirtualService, called VS-A and VS-B.
In my application there is a call from appservice-A to appservice-B, and I want to disable the retry when appservice-B throws a 503 error, so that appservice-A does not retry. What I did is set the retry attempts to 0 in VS-B, expecting that appservice-A would not retry when appservice-B throws a 503 error. This is not working for me.
I tested again with the sample below, setting attempts to 0 in VS-B, but the retries do not change; the output still shows 2.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  labels:
    app.kubernetes.io/managed-by: Helm
  name: appservice-B
  namespace: mynamespace
spec:
  gateways:
  - istio-system/ingress-gateway
  hosts:
  - appservice-B.mynamespace.svc.cluster.local
  - appservice-B.mycompany.com
  http:
  - match:
    - uri:
        prefix: /
    retries:
      attempts: 0
    route:
    - destination:
        host: appservice-B.mynamespace.svc.cluster.local
        port:
          number: 8080
        subset: v1
This is the proxy-config route output generated from appservice-A:
istioctl proxy-config route appservice-A-6dbb74bc88-dffb8 -n mynamespace -o json
"routes": [
{
"name": "default",
"match": {
"prefix": "/"
},
"route": {
"cluster": "outbound|8080||appservice-B.mynamespace.svc.cluster.local",
"timeout": "0s",
"retryPolicy": {
"retryOn": "connect-failure,refused-stream,unavailable,cancelled,retriable-status-codes",
"numRetries": 2,
"retryHostPredicate": [
{
"name": "envoy.retry_host_predicates.previous_hosts"
}
],
"hostSelectionRetryMaxAttempts": "5",
"retriableStatusCodes": [
503
]
},
"maxStreamDuration": {
"maxStreamDuration": "0s",
"grpcTimeoutHeaderMax": "0s"
}
},
"decorator": {
"operation": "appservice-B.mynamespace.svc.cluster.local:8080/*"
}
}
],
"includeRequestAttemptCount": true
So I suspect I may not be trying the right thing for my scenario. If this is the right approach, is there any workaround, such as changing the status codes or response headers used for retries, so that Istio will not retry the call from appservice-A to appservice-B when appservice-B returns a 503 error code?
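One thing worth checking (a hedged sketch, not a confirmed fix): the VirtualService above lists only the ingress gateway under gateways, so the sidecar route from appservice-A may not be picking up the override; the sketch below also scopes the rule to in-mesh traffic by adding the reserved mesh gateway, reusing the field values from the VirtualService above.
# Sketch only: "mesh" makes the rule apply to sidecar (appservice-A -> appservice-B)
# traffic as well as ingress-gateway traffic.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: appservice-B
  namespace: mynamespace
spec:
  gateways:
  - mesh                              # in-mesh (sidecar) traffic
  - istio-system/ingress-gateway      # existing gateway traffic
  hosts:
  - appservice-B.mynamespace.svc.cluster.local
  - appservice-B.mycompany.com
  http:
  - match:
    - uri:
        prefix: /
    retries:
      attempts: 0                     # disable retries entirely
    route:
    - destination:
        host: appservice-B.mynamespace.svc.cluster.local
        port:
          number: 8080
        subset: v1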

Related

How do I check packets coming to AWS Loadbalancer and Istio gateway with PacketBeat?

I am configuring Kubernetes on AWS EC2.
I use Elasticsearch's Packetbeat to get the geolocation of clients accessing the service.
Istio is used as the service mesh of Kubernetes, and a CLB is used as the load balancer.
I want to know the client IP accessing the service and the domain address the client accesses.
my packetbeat.yml
setup.dashboards.enabled: true
setup.template.enabled: true
setup.template.settings:
  index.number_of_shards: 2
packetbeat.interfaces.device: eth0
packetbeat.interfaces.snaplen: 1514
packetbeat.interfaces.auto_promices_mode: true
packetbeat.interfaces.with_vlans: true
packetbeat.protocols:
- type: dhcpv4
  ports: [67, 68]
- type: dns
  ports: [53]
  include_authorities: true
  include_additionals: true
- type: http
  ports: [80, 5601, 8081, 8002, 5000, 8000, 8080, 9200]
  send_request: true
  send_response: true
  send_header: ["User-Agent", "Cookie", "Set-Cookie"]
  real_ip_header: "X-Forwarded-For"
- type: mysql
  ports: [3306, 3307]
- type: memcache
  ports: [11211]
- type: redis
  ports: [6379]
- type: pgsql
  ports: [5432]
- type: thrift
  ports: [9090]
- type: mongodb
  ports: [27017]
- type: cassandra
  ports: [9042]
- type: tls
  ports: [443, 993, 995, 5223, 8443, 8883, 9243, 15021, 15443, 32440]
  send_request: true
  send_response: true
  send_all_headers: true
  include_body_for: ["text/html", "application/json"]
packetbeat.procs.enabled: true
packetbeat.flows:
  timeout: 30s
  period: 10s
  fields: ["server.domain"]
processors:
- include_fields:
    fields:
    - source.ip
    - server.domain
- add_docker_metadata:
- add_host_metadata:
- add_cloud_metadata:
- add_kubernetes_metadata:
    host: ${HOSTNAME}
    indexers:
    - ip_port:
    matchers:
    - field_format:
        format: '%{[ip]}:%{[port]}'
        # with version 7 of Packetbeat use the following line instead of the one above.
        #format: '%{[destination.ip]}:%{[destination.port]}'
output.elasticsearch:
  hosts: ${ELASTICSEARCH_ADDRESS}
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
  pipeline: geoip-info
setup.kibana:
  host: 'https://myhost:443'
my CLB listener (the CLB has the proxy protocol enabled).
But Packetbeat doesn't bring me the data I want.
Searching for a TLS log:
"client": {
"port": 1196,
"ip": "10.0.0.83"
},
"network": {
"community_id": "1:+ARNMwsOGxkBkrmWfCVawtA1GKo=",
"protocol": "tls",
"transport": "tcp",
"type": "ipv4",
"direction": "egress"
},
"destination": {
"port": 8443,
"ip": "10.0.1.77",
"domain": "my host domain"
},
Searching for flow.final: true:
"event": {
"duration": 1051434189423,
"kind": "event",
"start": "2022-10-28T05:25:14.171Z",
"action": "network_flow",
"end": "2022-10-28T05:42:45.605Z",
"category": [
"network_traffic",
"network"
],
"type": [
"connection"
],
"dataset": "flow"
},
"source": {
"geo": {
"continent_name": "Asia",
"region_iso_code": "KR",
"city_name": "Has-si",
"country_iso_code": "KR",
"country_name": "South Korea",
"region_name": "Gg",
"location": {
"lon": 126.8168,
"lat": 37.2072
}
},
"port": 50305,
"bytes": 24174,
"ip": "my real ip address",
"packets": 166
},
I can find each piece if I search separately, but there is nothing linking the two.
I would like to see a log that combines the two:
the domain the client accesses plus the real client IP.
Please help me.
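For reference, a hedged sketch of an Elasticsearch query (run, for example, from Kibana Dev Tools) that pulls both kinds of documents for one client IP side by side; the packetbeat-* index pattern and the field names are taken from the events above, everything else is an assumption:
GET packetbeat-*/_search
{
  "size": 50,
  "query": {
    "bool": {
      "should": [
        { "term": { "client.ip": "10.0.0.83" } },
        { "term": { "source.ip": "10.0.0.83" } }
      ],
      "minimum_should_match": 1
    }
  },
  "_source": ["@timestamp", "client.ip", "source.ip", "source.geo.country_name", "destination.domain", "network.protocol"]
}
This does not join the two documents; it only lists them together so the accessed domain and the real client IP can be matched up manually (for example by timestamp).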

client disconnected before any response GCLB

We deployed our site behind GCLB.
LB -> Cloud Run -> App Engine API
Cloud Run hosts a React site and App Engine hosts a Golang API.
After 12 hours we started to see a decline in the number of clicks via Google Analytics, but traffic stayed pretty much the same.
Our assumption is that we "lost" traffic somehow. I can see two main issues in the logs:
404s with addresses of old site components.
"client disconnected before any response" errors.
I can understand the 404 errors: they are cached requests looking for old site components.
But I don't understand the client disconnected error, and whether it is related to our "lost" traffic.
Any suggestion how to analyze our "lost" traffic?
UPDATE:
I found some correlation with the client disconnected error.
The requestUrl contains image resources, for example:
images/zoom.png?v1.0
The backend service name is empty: backend_service_name: ""
I am not sure how it can be empty; I mapped all the resources and hosts.
LOG
{
    "insertId": "cs2fmdg2eo8nba",
    "jsonPayload": {
        "cacheId": "FRA-1209ea83",
        "@type": "type.googleapis.com/google.cloud.loadbalancing.type.LoadBalancerLogEntry",
        "statusDetails": "client_disconnected_before_any_response"
    },
    "httpRequest": {
        "requestMethod": "GET",
        "requestUrl": "https://travelpricedrops.com/images/aero.png?v1.0",
        "requestSize": "78",
        "userAgent": "Mozilla/5.0 (iPhone; CPU iPhone OS 14_8 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1.2 Mobile/15E148 Safari/604.1",
        "remoteIp": "109.104.52.1",
        "referer": "https://travelpricedrops.com/passthru?tab=front&vert=flights&origin-iata=LEJ&destination-iata=JFK&departure-time=2021-12-26T11%3A00%3A00Z&cabin-class=economy&num-adults=1&num-youth=0&rental-duration=6&dta=48&return-time=2022-01-01T11%3A00%3A00Z&f=cf&fuid=1102&b=k&buid=1043",
        "cacheLookup": true,
        "latency": "0.071958s"
    },
    "resource": {
        "type": "http_load_balancer",
        "labels": {
            "zone": "global",
            "backend_service_name": "",
            "forwarding_rule_name": "tpd-int-https-ipv4",
            "target_proxy_name": "int-tpd-target-proxy-2",
            "url_map_name": "int-tpd",
            "project_id": "tpdrops"
        }
    },
    "timestamp": "2021-11-09T06:13:55.121455Z",
    "severity": "INFO",
    "logName": "projects/tpdrops/logs/requests",
    "trace": "projects/tpdrops/traces/13821ba38ae9e3191381f3f64b0a7b1a",
    "receiveTimestamp": "2021-11-09T06:13:55.343086132Z",
    "spanId": "a5ae86336a24bc32"
}
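For reference, a hedged sketch of a Cloud Logging query that pulls all such entries for analysis; the filter fields follow the log entry above, and the limit is arbitrary:
# List recent client_disconnected_before_any_response entries from the HTTPS LB
gcloud logging read \
  'resource.type="http_load_balancer" AND jsonPayload.statusDetails="client_disconnected_before_any_response"' \
  --project=tpdrops --limit=20 --format=json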
Config
**gcloud compute forwarding-rules describe tpd-int-https-ipv4**
IPAddress: 34.149.93.11
IPProtocol: TCP
creationTimestamp: '2021-08-30T11:49:06.047-07:00'
description: ''
fingerprint: CIAg3TcEb9Y=
id: '1815919129513727693'
kind: compute#forwardingRule
labelFingerprint: 42WmSpB8rSM=
loadBalancingScheme: EXTERNAL
name: tpd-int-https-ipv4
networkTier: PREMIUM
portRange: 443-443
selfLink: https://www.googleapis.com/compute/v1/projects/tpdrops/global/forwardingRules/tpd-int-https-ipv4
target: https://www.googleapis.com/compute/v1/projects/tpdrops/global/targetHttpsProxies/int-tpd-target-proxy-2
**gcloud compute backend-services describe tpd-prod-back**
affinityCookieTtlSec: 0
backends:
- balancingMode: UTILIZATION
  capacityScaler: 0.0
  group: https://www.googleapis.com/compute/v1/projects/tpdrops/regions/us-central1/networkEndpointGroups/tpd-front
cdnPolicy:
  cacheKeyPolicy:
    includeHost: true
    includeProtocol: true
    includeQueryString: true
  cacheMode: CACHE_ALL_STATIC
  clientTtl: 3600
  defaultTtl: 3600
  maxTtl: 86400
  negativeCaching: false
  requestCoalescing: true
  serveWhileStale: 86400
  signedUrlCacheMaxAgeSec: '0'
connectionDraining:
  drainingTimeoutSec: 0
creationTimestamp: '2021-10-25T04:09:29.908-07:00'
description: ''
enableCDN: true
fingerprint: 5FNZk6GXJTw=
iap:
  enabled: false
id: '6357784085114072710'
kind: compute#backendService
loadBalancingScheme: EXTERNAL
logConfig:
  enable: true
  sampleRate: 1.0
name: tpd-prod-back
port: 80
portName: http
protocol: HTTP
selfLink: https://www.googleapis.com/compute/v1/projects/tpdrops/global/backendServices/tpd-prod-back
sessionAffinity: NONE
timeoutSec: 30
**gcloud compute url-maps describe int-tpd**
creationTimestamp: '2021-08-29T06:08:35.918-07:00'
defaultService: https://www.googleapis.com/compute/v1/projects/tpdrops/global/backendServices/tpd-prod-back
fingerprint: trtG9xBMlvE=
hostRules:
- hosts:
  - acpt.travelpricedrops.com
  pathMatcher: path-matcher-2
- hosts:
  - int.travelpricedrops.com
  pathMatcher: path-matcher-1
- hosts:
  - api.acpt.travelpricedrops.com
  pathMatcher: path-matcher-3
- hosts:
  - api.int.travelpricedrops.com
  pathMatcher: path-matcher-4
- hosts:
  - api.travelpricedrops.com
  pathMatcher: path-matcher-5
- hosts:
  - travelpricedrops.com
  pathMatcher: path-matcher-6
id: '6018005644614187068'
kind: compute#urlMap
name: int-tpd
pathMatchers:
- defaultService: https://www.googleapis.com/compute/v1/projects/tpdrops/global/backendServices/tpd-acpt-back
  name: path-matcher-2
- defaultService: https://www.googleapis.com/compute/v1/projects/tpdrops/global/backendServices/tpd-int-http
  name: path-matcher-1
- defaultService: https://www.googleapis.com/compute/v1/projects/tpdrops/global/backendServices/tpd-api-acpt
  name: path-matcher-3
- defaultService: https://www.googleapis.com/compute/v1/projects/tpdrops/global/backendServices/tpd-api-int
  name: path-matcher-4
- defaultService: https://www.googleapis.com/compute/v1/projects/tpdrops/global/backendServices/tpd-api
  name: path-matcher-5
- defaultService: https://www.googleapis.com/compute/v1/projects/tpdrops/global/backendServices/tpd-prod-back
  name: path-matcher-6
selfLink: https://www.googleapis.com/compute/v1/projects/tpdrops/global/urlMaps/int-tpd
**gcloud compute target-http-proxies describe int-tpd-target-proxy-2**
ERROR: (gcloud.compute.target-http-proxies.describe) Could not fetch resource:
- The resource 'projects/tpdrops/global/targetHttpProxies/int-tpd-target-proxy-2' was not found
Your load balancer's configuration looks OK; you have an HTTPS (SSL) frontend on port 443 pointing to an HTTP backend on port 80, which means SSL is terminated at the load balancer and traffic is sent as plain HTTP to your backend.
The error you're getting means (as per the documentation) that the client disconnected before the load balancer could reply:
client_disconnected_before_any_response - The connection to the client was broken before the load balancer sent any response.
Now, to answer your questions:
1. Since the images are served directly by your app (I didn't see any host-path rules saying otherwise), make sure the application can serve images in time. Set your application response timeout to 10 seconds or more and this should solve the issue. Have a look at this discussion, which may be quite useful for you.
1.1. There is also a configurable request timeout for Cloud Run services; you can check it by running gcloud run services describe SERVICE_NAME (see the sketch after this list).
2. The backend_service_name: "" string you mentioned may be empty - nothing to worry about - this is expected behavior.
3. Additionally, have a look at the backend service timeout in Timeouts and retries in external load balancing, which may also shed some light on your case.
Lastly, have a look at How to debug failed requests with client_disconnected_before_any_response.
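A hedged sketch for point 1.1, checking and raising the Cloud Run request timeout (SERVICE_NAME and REGION are placeholders):
# Show the currently configured request timeout of the Cloud Run service
gcloud run services describe SERVICE_NAME --region=REGION \
  --format='value(spec.template.spec.timeoutSeconds)'
# Raise it, for example to 300 seconds
gcloud run services update SERVICE_NAME --region=REGION --timeout=300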

Istio EnvoyFilter HTTP_ROUTE example

I am trying to write an EnvoyFilter for the istio-ingressgateway routes:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: retry
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: HTTP_ROUTE
    match:
      context: GATEWAY
      routeConfiguration:
        vhost:
          name: '*.example.net:8000'
          route:
            name: 'cfs'
    patch:
      operation: MERGE
      value:
        typed_config:
          '@type': type.googleapis.com/envoy.config.route.v3.Route
          route:
            cluster_not_found_response_code: NOT_FOUND
This filter is not working; where did I make a mistake?
Istio v1.9.3
I expect cluster_not_found_response_code: NOT_FOUND to appear in this configuration:
$ istioctl proxy-config route istio-ingressgateway-5abc45c5cb-44l47.istio-system -o json
[
    {
        "name": "http.8000",
        "virtualHosts": [
            {
                "name": "*.example.net:8000",
                "domains": [
                    "*.example.net",
                    "*.example.net:8000"
                ],
                "routes": [
                    {
                        "name": "cfs",
                        "match": {
                            "prefix": "/upload",
                            "caseSensitive": true
                        },
                        "route": {
                            "cluster": "outbound|8000||cfs.default.svc.cluster.local",
                            "timeout": "0s",
                            "retryPolicy": {
                                "retryOn": "retriable-status-codes,connect-failure,reset",
                                "numRetries": 4,
                                "retryPriority": {
                                    "name": "envoy.retry_priorities.previous_priorities",
                                    "typedConfig": {
                                        "@type": "type.googleapis.com/envoy.config.retry.previous_priorities.PreviousPrioritiesConfig",
                                        "updateFrequency": 2
                                    }
                                },
                                "retryHostPredicate": [
                                    {
                                        "name": "envoy.retry_host_predicates.previous_hosts"
                                    }
                                ],
                                "hostSelectionRetryMaxAttempts": "5",
                                "retriableStatusCodes": [
                                    404
                                ]
                            },
                            "cors": {
                                "allowOriginStringMatch": [
                                    {
                                        "exact": "*"
                                    }
                                ],
                                "allowMethods": "GET,POST,DELETE,OPTIONS",
                                "allowHeaders": "Content-Type,Content-Disposition,Origin,Accept",
                                "maxAge": "86400",
                                "allowCredentials": false,
                                "filterEnabled": {
                                    "defaultValue": {
                                        "numerator": 100
                                    }
                                }
                            },
                            "maxStreamDuration": {
                                "maxStreamDuration": "0s"
                            }
                        },
                        "metadata": {
                            "filterMetadata": {
                                "istio": {
                                    "config": "/apis/networking.istio.io/v1alpha3/namespaces/default/virtual-service/cara"
                                }
                            }
                        },
                        "decorator": {
                            "operation": "cfs.default.svc.cluster.local:8000/upload*"
                        },
                        "responseHeadersToRemove": [
                            "x-envoy-upstream-service-time"
                        ]
                    },
                    ...
                ],
                "includeRequestAttemptCount": true
            },
            ...
        ],
        "validateClusters": false
    },
    ...
]
I am unable to change any route configuration value; cluster_not_found_response_code is just an example.
In my environment, the given EnvoyFilter definition does not pass schema validation at the CRD level.
CRD validation error while creating the EnvoyFilter resource:
Warning: Envoy filter: unknown field "typed_config" in envoy.config.route.v3.Route
envoyfilter.networking.istio.io/retry-faulty created
It seems there is no Envoy type Route in the v3 API that can be merged this way.
Workaround:
You may try to specify a direct response at the VirtualService level, as in this GitHub issue.
This works fine for me:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: retry-faulty
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: HTTP_ROUTE
    match:
      context: GATEWAY
      routeConfiguration:
        vhost:
          name: 'productpage.com:80'
          route:
            name: 'http.80'
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": type.googleapis.com/envoy.config.route.v3.RouteConfiguration
          route:
            cluster_not_found_response_code: NOT_FOUND
Response headers from istio:
HTTP/1.1 404 Not Found
content-length: 9
content-type: text/plain
date: Fri, 11 Jun 2021 12:46:01 GMT
server: istio-envoy
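A hedged sketch of verifying that the patch landed, reusing the proxy-config command from the question (the pod name is the one shown above):
# Dump the gateway routes and look for the patched field
istioctl proxy-config route istio-ingressgateway-5abc45c5cb-44l47.istio-system -o json \
  | grep -i clusternotfound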

redis fault injection using istio and envoy filter

I am trying to inject a 2s delay into a Redis instance (which is not in the cluster) using Istio.
So, first I create an ExternalName Kubernetes Service in order to reach the external Redis, so that Istio knows about this service. This works. However, when I create the EnvoyFilter to add the fault, I don't see the redis_proxy filter in istioctl pc listeners <pod-name> -o json for a pod in the same namespace (and the delay is not introduced).
apiVersion: v1
kind: Namespace
metadata:
  name: chaos
  labels:
    istio-injection: enabled
---
apiVersion: v1
kind: Service
metadata:
  name: redis-proxy
  namespace: chaos
spec:
  type: ExternalName
  externalName: redis-external.bla
  ports:
  - name: tcp
    protocol: TCP
    port: 6379
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: redis-proxy-filter
  namespace: chaos
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      listener:
        portNumber: 6379
        filterChain:
          filter:
            name: "envoy.filters.network.redis_proxy"
    patch:
      operation: MERGE
      value:
        name: "envoy.filters.network.redis_proxy"
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy
          faults:
          - fault_type: DELAY
            fault_enabled:
              default_value:
                numerator: 100
                denominator: HUNDRED
            delay: 2s
Can someone give an idea? Thanks.
I tried your YAML on my local Istio 1.8.2. Here are a few changes that might help you (a combined sketch follows below).
Set PILOT_ENABLE_REDIS_FILTER in the istiod env vars; otherwise, the filter name will be "name": "envoy.filters.network.tcp_proxy".
Add a match context:
match:
  context: SIDECAR_OUTBOUND
Use the redis protocol on the Service port:
ports:
- name: redis-proxy
  port: 6379
  appProtocol: redis
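A combined sketch of those changes (hedged; the namespace and Service names follow the question's YAML, and the istiod deployment name is assumed to be the default one in istio-system):
# 1. Enable the Envoy redis_proxy filter in istiod (restart the workload pods
#    afterwards so their sidecars receive the regenerated listener config)
kubectl -n istio-system set env deployment/istiod PILOT_ENABLE_REDIS_FILTER=true
# 2. Declare the port as redis via appProtocol on the ExternalName Service
apiVersion: v1
kind: Service
metadata:
  name: redis-proxy
  namespace: chaos
spec:
  type: ExternalName
  externalName: redis-external.bla
  ports:
  - name: redis-proxy
    protocol: TCP
    port: 6379
    appProtocol: redis
# 3. In the EnvoyFilter from the question, add under configPatches[0].match:
#      context: SIDECAR_OUTBOUND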
I can see the following
% istioctl pc listener nginx.chaos --port 6379 -o json
[
    {
        "name": "0.0.0.0_6379",
        "address": {
            "socketAddress": {
                "address": "0.0.0.0",
                "portValue": 6379
            }
        },
        "filterChains": [
            {
                "filters": [
                    {
                        "name": "envoy.filters.network.redis_proxy",
                        "typedConfig": {
                            "@type": "type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy",
                            "statPrefix": "outbound|6379||redis-proxy.chaos.svc.cluster.local",
                            "settings": {
                                "opTimeout": "5s"
                            },
                            "latencyInMicros": true,
                            "prefixRoutes": {
                                "catchAllRoute": {
                                    "cluster": "outbound|6379||redis-proxy.chaos.svc.cluster.local"
                                }
                            },
                            "faults": [
                                {
                                    "faultEnabled": {
                                        "defaultValue": {
                                            "numerator": 100
                                        }
                                    },
                                    "delay": "2s"
                                }
                            ]
                        }
                    }
                ]
            }
        ],
        "deprecatedV1": {
            "bindToPort": false
        },
        "trafficDirection": "OUTBOUND"
    }
]
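A hedged way to confirm the injected delay from a pod in the same namespace, assuming a test pod with redis-cli installed (the pod name redis-client is a placeholder):
# Time a PING through the sidecar; with the fault active it should take roughly 2s extra
kubectl -n chaos exec -it redis-client -- sh -c 'time redis-cli -h redis-proxy -p 6379 ping'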

GCP ingress fails to be created - Error during sync: Error running backend syncing routine: googleapi: got HTTP response code 404 with body: Not Found

I'm trying a simple Ingress in GKE, following the example from https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress.
The pods are up and running and the services are active. When I create the Ingress I get:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 48m loadbalancer-controller default/my-ingress
Warning Sync 2m32s (x25 over 48m) loadbalancer-controller Error during sync: Error running backend syncing routine: googleapi: got HTTP response code 404 with body: Not Found
I can't find the source of the problem. Any suggestions on where to look?
I have checked the cluster add-ons and permissions:
httpLoadBalancing enabled
- https://www.googleapis.com/auth/compute
- https://www.googleapis.com/auth/devstorage.read_only
- https://www.googleapis.com/auth/logging.write
- https://www.googleapis.com/auth/monitoring
- https://www.googleapis.com/auth/servicecontrol
- https://www.googleapis.com/auth/service.management.readonly
- https://www.googleapis.com/auth/trace.append
NAME READY STATUS RESTARTS AGE
hello-kubernetes-deployment-f6cb6cf4f-kszd9 1/1 Running 0 1h
hello-kubernetes-deployment-f6cb6cf4f-lw49t 1/1 Running 0 1h
hello-kubernetes-deployment-f6cb6cf4f-qqgxs 1/1 Running 0 1h
hello-world-deployment-5cfbc486f-4c2bm 1/1 Running 0 1h
hello-world-deployment-5cfbc486f-dmcqf 1/1 Running 0 1h
hello-world-deployment-5cfbc486f-rnpcc 1/1 Running 0 1h
Name: hello-world
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"hello-world","namespace":"default"},"spec":{"ports":[{"port":6000...
Selector: department=world,greeting=hello
Type: NodePort
IP: 10.59.254.88
Port: <unset> 60000/TCP
TargetPort: 50000/TCP
NodePort: <unset> 30418/TCP
Endpoints: 10.56.2.7:50000,10.56.3.6:50000,10.56.6.4:50000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Name: hello-kubernetes
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"hello-kubernetes","namespace":"default"},"spec":{"ports":[{"port"...
Selector: department=kubernetes,greeting=hello
Type: NodePort
IP: 10.59.251.189
Port: <unset> 80/TCP
TargetPort: 8080/TCP
NodePort: <unset> 32464/TCP
Endpoints: 10.56.2.6:8080,10.56.6.3:8080,10.56.8.6:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Name: my-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (10.56.0.9:8080)
Rules:
Host Path Backends
---- ---- --------
*
/* hello-world:60000 (<none>)
/kube hello-kubernetes:80 (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"gce"},"name":"my-ingress","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"hello-world","servicePort":60000},"path":"/*"},{"backend":{"serviceName":"hello-kubernetes","servicePort":80},"path":"/kube"}]}}]}}
kubernetes.io/ingress.class: gce
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 107s loadbalancer-controller default/my-ingress
Warning Sync 66s (x15 over 107s) loadbalancer-controller Error during sync: Error running backend syncing routine: googleapi: got HTTP response code 404 with body: Not Found
Pulumi Cluster Config
{
    "name": "test-cluster",
    "region": "europe-west4",
    "addonsConfig": {
        "httpLoadBalancing": {
            "disabled": false
        },
        "kubernetesDashboard": {
            "disabled": false
        }
    },
    "ipAllocationPolicy": {},
    "pools": [
        {
            "name": "default-pool",
            "initialNodeCount": 1,
            "nodeConfig": {
                "oauthScopes": [
                    "https://www.googleapis.com/auth/compute",
                    "https://www.googleapis.com/auth/devstorage.read_only",
                    "https://www.googleapis.com/auth/service.management",
                    "https://www.googleapis.com/auth/servicecontrol",
                    "https://www.googleapis.com/auth/logging.write",
                    "https://www.googleapis.com/auth/monitoring",
                    "https://www.googleapis.com/auth/trace.append",
                    "https://www.googleapis.com/auth/cloud-platform"
                ],
                "machineType": "n1-standard-1",
                "labels": {
                    "pool": "api-zero"
                }
            },
            "management": {
                "autoUpgrade": false,
                "autoRepair": true
            },
            "autoscaling": {
                "minNodeCount": 1,
                "maxNodeCount": 20
            }
        },
        {
            "name": "outbound",
            "initialNodeCount": 2,
            "nodeConfig": {
                "machineType": "custom-1-1024",
                "oauthScopes": [
                    "https://www.googleapis.com/auth/compute",
                    "https://www.googleapis.com/auth/devstorage.read_only",
                    "https://www.googleapis.com/auth/service.management",
                    "https://www.googleapis.com/auth/servicecontrol",
                    "https://www.googleapis.com/auth/logging.write",
                    "https://www.googleapis.com/auth/monitoring",
                    "https://www.googleapis.com/auth/trace.append",
                    "https://www.googleapis.com/auth/cloud-platform"
                ],
                "labels": {
                    "pool": "outbound"
                }
            },
            "management": {
                "autoUpgrade": false,
                "autoRepair": true
            }
        }
    ]
}
The author of this post eventually figured out that the issue occurs only when the cluster is bootstrapped with Pulumi.
It looks like you are missing a default backend (for the L7 HTTP load balancer) for your default ingress controller. From what I observed, it is not deployed when you have the Istio add-on enabled in your GKE cluster (Istio has its own default ingress/egress gateways).
Please verify whether it is up and running in your cluster:
kubectl get pod -n kube-system | grep l7-default-backend
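If it is missing, a hedged sketch of (re)enabling the HTTP load balancing add-on so GKE deploys it; the cluster name and region are taken from the Pulumi config above:
# (Re)enable the HttpLoadBalancing add-on, which provides l7-default-backend
gcloud container clusters update test-cluster --region europe-west4 \
  --update-addons=HttpLoadBalancing=ENABLED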