I am trying to write an EnvoyFilter for the istio-ingressgateway routes:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: retry
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: HTTP_ROUTE
    match:
      context: GATEWAY
      routeConfiguration:
        vhost:
          name: '*.example.net:8000'
          route:
            name: 'cfs'
    patch:
      operation: MERGE
      value:
        typed_config:
          '@type': type.googleapis.com/envoy.config.route.v3.Route
          route:
            cluster_not_found_response_code: NOT_FOUND
This filter is not working. Where did I make a mistake?
Istio v1.9.3
I expect cluster_not_found_response_code: NOT_FOUND to appear in this configuration:
$ istioctl proxy-config route istio-ingressgateway-5abc45c5cb-44l47.istio-system -o json
[
{
"name": "http.8000",
"virtualHosts": [
{
"name": "*.example.net:8000",
"domains": [
"*.example.net",
"*.example.net:8000"
],
"routes": [
{
"name": "cfs",
"match": {
"prefix": "/upload",
"caseSensitive": true
},
"route": {
"cluster": "outbound|8000||cfs.default.svc.cluster.local",
"timeout": "0s",
"retryPolicy": {
"retryOn": "retriable-status-codes,connect-failure,reset",
"numRetries": 4,
"retryPriority": {
"name": "envoy.retry_priorities.previous_priorities",
"typedConfig": {
"@type": "type.googleapis.com/envoy.config.retry.previous_priorities.PreviousPrioritiesConfig",
"updateFrequency": 2
}
},
"retryHostPredicate": [
{
"name": "envoy.retry_host_predicates.previous_hosts"
}
],
"hostSelectionRetryMaxAttempts": "5",
"retriableStatusCodes": [
404
]
},
"cors": {
"allowOriginStringMatch": [
{
"exact": "*"
}
],
"allowMethods": "GET,POST,DELETE,OPTIONS",
"allowHeaders": "Content-Type,Content-Disposition,Origin,Accept",
"maxAge": "86400",
"allowCredentials": false,
"filterEnabled": {
"defaultValue": {
"numerator": 100
}
}
},
"maxStreamDuration": {
"maxStreamDuration": "0s"
}
},
"metadata": {
"filterMetadata": {
"istio": {
"config": "/apis/networking.istio.io/v1alpha3/namespaces/default/virtual-service/cara"
}
}
},
"decorator": {
"operation": "cfs.default.svc.cluster.local:8000/upload*"
},
"responseHeadersToRemove": [
"x-envoy-upstream-service-time"
]
},
...
],
"includeRequestAttemptCount": true
},
...
],
"validateClusters": false
},
...
]
I am unable to change any route configuration value; cluster_not_found_response_code is just an example.
In my environment, the given EnvoyFilter definition does not pass schema validation at the CRD level:
CRD validation error while creating EnvoyFilter resource:
Warning: Envoy filter: unknown field "typed_config" in envoy.config.route.v3.Route
envoyfilter.networking.istio.io/retry-faulty created
It seems there is no such Envoy type as Route in the v3 API.
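That said, if the goal is only to merge fields into the matched route, it might be worth trying the patch value without the typed_config wrapper; as far as I can tell, for applyTo: HTTP_ROUTE the patch value is interpreted as Route fields directly. An untested sketch of the original filter in that form:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: retry
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: HTTP_ROUTE
    match:
      context: GATEWAY
      routeConfiguration:
        vhost:
          name: '*.example.net:8000'
          route:
            name: 'cfs'
    patch:
      operation: MERGE
      value:
        # patch value written as plain Route fields, no typed_config wrapper
        route:
          cluster_not_found_response_code: NOT_FOUND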
Workaround:
You may try to specify a direct response at the VirtualService level, as in this GitHub issue:
This works fine for me:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: retry-faulty
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: HTTP_ROUTE
    match:
      context: GATEWAY
      routeConfiguration:
        vhost:
          name: 'productpage.com:80'
          route:
            name: 'http.80'
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": type.googleapis.com/envoy.config.route.v3.RouteConfiguration
          route:
            cluster_not_found_response_code: NOT_FOUND
Response headers from Istio:
HTTP/1.1 404 Not Found
content-length: 9
content-type: text/plain
date: Fri, 11 Jun 2021 12:46:01 GMT
server: istio-envoy
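For completeness, the VirtualService-level approach mentioned above could look roughly like the sketch below. It uses HTTP fault injection to return a fixed 404 for a matched prefix; the names are hypothetical and the linked issue may use a different mechanism:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: cfs-not-found        # hypothetical name
  namespace: default
spec:
  hosts:
  - "*.example.net"
  gateways:
  - my-gateway               # hypothetical gateway name
  http:
  - match:
    - uri:
        prefix: /upload
    fault:
      abort:
        percentage:
          value: 100
        httpStatus: 404
    route:                   # a route is still required even though the abort always fires
    - destination:
        host: cfs.default.svc.cluster.local
        port:
          number: 8000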
Related
I am configuring Kubernetes on AWS EC2.
I use Elasticsearch's Packetbeat to get the geolocation of clients accessing the service.
Istio is used as the Kubernetes service mesh, and a CLB is used as the load balancer.
I want to know the client IP accessing the service and the domain the client accesses.
My packetbeat.yml:
setup.dashboards.enabled: true
setup.template.enabled: true
setup.template.settings:
  index.number_of_shards: 2
packetbeat.interfaces.device: eth0
packetbeat.interfaces.snaplen: 1514
packetbeat.interfaces.auto_promisc_mode: true
packetbeat.interfaces.with_vlans: true
packetbeat.protocols:
- type: dhcpv4
  ports: [67, 68]
- type: dns
  ports: [53]
  include_authorities: true
  include_additionals: true
- type: http
  ports: [80, 5601, 8081, 8002, 5000, 8000, 8080, 9200]
  send_request: true
  send_response: true
  send_headers: ["User-Agent", "Cookie", "Set-Cookie"]
  real_ip_header: "X-Forwarded-For"
- type: mysql
  ports: [3306, 3307]
- type: memcache
  ports: [11211]
- type: redis
  ports: [6379]
- type: pgsql
  ports: [5432]
- type: thrift
  ports: [9090]
- type: mongodb
  ports: [27017]
- type: cassandra
  ports: [9042]
- type: tls
  ports: [443, 993, 995, 5223, 8443, 8883, 9243, 15021, 15443, 32440]
  send_request: true
  send_response: true
  send_all_headers: true
  include_body_for: ["text/html", "application/json"]
packetbeat.procs.enabled: true
packetbeat.flows:
  timeout: 30s
  period: 10s
  fields: ["server.domain"]
processors:
- include_fields:
    fields:
      - source.ip
      - server.domain
- add_docker_metadata:
- add_host_metadata:
- add_cloud_metadata:
- add_kubernetes_metadata:
    host: ${HOSTNAME}
    indexers:
      - ip_port:
    matchers:
      - field_format:
          format: '%{[ip]}:%{[port]}'
          # with version 7 of Packetbeat use the following line instead of the one above.
          #format: '%{[destination.ip]}:%{[destination.port]}'
output.elasticsearch:
  hosts: ${ELASTICSEARCH_ADDRESS}
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
  pipeline: geoip-info
setup.kibana:
  host: 'https://myhost:443'
My CLB listener:
Proxy protocol is enabled on the CLB.
But Packetbeat does not give me the data I want.
Search result for a TLS log:
"client": {
"port": 1196,
"ip": "10.0.0.83"
},
"network": {
"community_id": "1:+ARNMwsOGxkBkrmWfCVawtA1GKo=",
"protocol": "tls",
"transport": "tcp",
"type": "ipv4",
"direction": "egress"
},
"destination": {
"port": 8443,
"ip": "10.0.1.77",
"domain": "my host domain"
},
Search result for flow.final: true:
"event": {
"duration": 1051434189423,
"kind": "event",
"start": "2022-10-28T05:25:14.171Z",
"action": "network_flow",
"end": "2022-10-28T05:42:45.605Z",
"category": [
"network_traffic",
"network"
],
"type": [
"connection"
],
"dataset": "flow"
},
"source": {
"geo": {
"continent_name": "Asia",
"region_iso_code": "KR",
"city_name": "Has-si",
"country_iso_code": "KR",
"country_name": "South Korea",
"region_name": "Gg",
"location": {
"lon": 126.8168,
"lat": 37.2072
}
},
"port": 50305,
"bytes": 24174,
"ip": "my real ip address",
"packets": 166
},
I can find each piece if I search separately, but there is nothing that connects the two.
I would like to see a single log combining the two results above:
the domain the client accesses plus the real client IP.
Please help me.
How can I set up a Terraform external-dns config for multiple environments (dev/staging/pre-prod)?
module "eks-external-dns" {
  source  = "lablabs/eks-external-dns/aws"
  version = "1.0.0"

  namespace                        = "kube-system"
  cluster_identity_oidc_issuer     = module.eks.cluster_oidc_issuer_url
  cluster_identity_oidc_issuer_arn = module.eks.oidc_provider_arn

  settings = {
    "policy"        = "sync"
    "source"        = "service"
    "source"        = "ingress"
    "log-level"     = "verbose"
    "log-format"    = "text"
    "interval"      = "1m"
    "provider"      = "aws"
    "aws-zone-type" = "public"
    "registry"      = "txt"
    "txt-owner-id"  = "XXXXXXXXXXXXXX"
  }
}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15"
    alb.ingress.kubernetes.io/healthcheck-path: /health
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
    alb.ingress.kubernetes.io/healthy-threshold-count: "3"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-1:xxx:certificate/aaaa-bbb-ccc-dd-ffff
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/success-codes: "200"
    alb.ingress.kubernetes.io/tags: createdBy=aws-controller
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/unhealthy-threshold-count: "3"
    external-dns.alpha.kubernetes.io/hostname: keycloak-ingress-controller
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/load-balancer-name: acme-lb
    alb.ingress.kubernetes.io/group.name: acme-group
  name: keycloak-ingress-controller
spec:
  rules:
  - host: dev.keycloak.acme.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: keycloak
            port:
              number: 8080
In my current situation, only my x.domain is processed by external-dns.
I want to be able to make it work with URLs like dev.myapp.example.com, staging.myapp.example.com, and so on.
I resolved it by using Helm values directly instead of settings:
module "eks-external-dns" {
  source  = "lablabs/eks-external-dns/aws"
  version = "1.0.0"

  # insert the 2 required variables here
  namespace                        = "kube-system"
  cluster_identity_oidc_issuer     = module.eks.cluster_oidc_issuer_url
  cluster_identity_oidc_issuer_arn = module.eks.oidc_provider_arn

  values = yamlencode({
    "sources" : ["service", "ingress"]
    "logLevel" : "debug"
    "provider" : "aws"
    "registry" : "txt"
    "txtOwnerId" : "xxxx"
    "txtPrefix" : "external-dns"
    "policy" : "sync"
    "domainFilters" : [
      "acme.com"
    ]
    "publishInternalServices" : "true"
    "triggerLoopOnEvent" : "true"
    "interval" : "15s"
    "podLabels" : {
      "app" : "aws-external-dns-helm"
    }
  })
}
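With sources including ingress and domainFilters covering acme.com, external-dns creates a record for every host in the Ingress rules, so each environment mostly just needs its own host (and, ideally, its own txtOwnerId per cluster so the TXT ownership records don't clash). A sketch for a hypothetical staging environment, trimmed to the relevant parts:
# Hypothetical staging variant; only the host differs from the dev Ingress in the question.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak-ingress-controller
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
  - host: staging.keycloak.acme.com   # dev.keycloak.acme.com, preprod.keycloak.acme.com, ...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: keycloak
            port:
              number: 8080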
I am not sure whether we are talking about the same issue. I will try to explain the scenario I am testing. I have two services, appservice-A and appservice-B; both are in the same namespace "mynamespace", and each has a separate VirtualService, called VS-A and VS-B.
In my application there is a call from appservice-A to appservice-B, and I want to disable the retry when appservice-B throws a 503 error, so that appservice-A does not retry. What I did was set the retry attempts to 0 in VS-B, expecting that appservice-A would not retry if appservice-B throws a 503. This is not working for me.
I performed the test again with the sample for you, setting attempts to 0 in VS-B, but there is still no change to the retries; it still shows 2.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  labels:
    app.kubernetes.io/managed-by: Helm
  name: appservice-B
  namespace: mynamespace
spec:
  gateways:
  - istio-system/ingress-gateway
  hosts:
  - appservice-B.mynamespace.svc.cluster.local
  - appservice-B.mycompany.com
  http:
  - match:
    - uri:
        prefix: /
    retries:
      attempts: 0
    route:
    - destination:
        host: appservice-B.mynamespace.svc.cluster.local
        port:
          number: 8080
        subset: v1
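For reference, as far as I understand the Istio docs, a VirtualService with an explicit gateways list applies only to those gateways; for the in-mesh call from appservice-A to appservice-B, the reserved mesh gateway would also have to be listed, roughly like this (sketch of the changed part only):
spec:
  gateways:
  - istio-system/ingress-gateway
  - mesh    # reserved name: also apply these routes (including retries: 0) to sidecars in the mesh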
This is the proxy-config routing rule output generated from appservice-A:
istioctl proxy-config route appservice-A-6dbb74bc88-dffb8 -n mynamespace -o json
"routes": [
{
"name": "default",
"match": {
"prefix": "/"
},
"route": {
"cluster": "outbound|8080||appservice-B.mynamespace.svc.cluster.local",
"timeout": "0s",
"retryPolicy": {
"retryOn": "connect-failure,refused-stream,unavailable,cancelled,retriable-status-codes",
"numRetries": 2,
"retryHostPredicate": [
{
"name": "envoy.retry_host_predicates.previous_hosts"
}
],
"hostSelectionRetryMaxAttempts": "5",
"retriableStatusCodes": [
503
]
},
"maxStreamDuration": {
"maxStreamDuration": "0s",
"grpcTimeoutHeaderMax": "0s"
}
},
"decorator": {
"operation": "appservice-B.mynamespace.svc.cluster.local:8080/*"
}
}
],
"includeRequestAttemptCount": true
So I suspect that I may not be trying the right thing for my scenario. If the approach is right, is there any workaround, such as changing the status codes or headers of the responses, so that Istio will not retry appservice-B from appservice-A when a 503 error code is returned by appservice-B?
I am trying to inject a 2s delay into a Redis instance (which is not in the cluster) using Istio.
So, first I create an ExternalName Kubernetes Service in order to reach the external Redis, so that Istio knows about this service. This works. However, when I create an EnvoyFilter to add the fault, I don't see a redis_proxy filter in istioctl pc listeners <pod-name> -o json for a pod in the same namespace (and the delay is not introduced).
apiVersion: v1
kind: Namespace
metadata:
  name: chaos
  labels:
    istio-injection: enabled
---
apiVersion: v1
kind: Service
metadata:
  name: redis-proxy
  namespace: chaos
spec:
  type: ExternalName
  externalName: redis-external.bla
  ports:
  - name: tcp
    protocol: TCP
    port: 6379
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: redis-proxy-filter
  namespace: chaos
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      listener:
        portNumber: 6379
        filterChain:
          filter:
            name: "envoy.filters.network.redis_proxy"
    patch:
      operation: MERGE
      value:
        name: "envoy.filters.network.redis_proxy"
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy
          faults:
          - fault_type: DELAY
            fault_enabled:
              default_value:
                numerator: 100
                denominator: HUNDRED
            delay: 2s
Can someone give an idea? Thanks.
I tried your YAML on my local Istio 1.8.2. Here are a few changes that might help you.
Set PILOT_ENABLE_REDIS_FILTER in the istiod env vars; otherwise the filter name will be "name": "envoy.filters.network.tcp_proxy".
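For example, assuming istiod is managed through the IstioOperator API, the flag could be set roughly like this (a sketch; adjust to however Istio was installed, e.g. by editing the istiod Deployment env directly instead):
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    pilot:
      k8s:
        env:
        # enables generation of the redis_proxy network filter
        - name: PILOT_ENABLE_REDIS_FILTER
          value: "true"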
Add a match context:
match:
  context: SIDECAR_OUTBOUND
Use the redis protocol for the Service port:
ports:
- name: redis-proxy
  port: 6379
  appProtocol: redis
Now I can see the following:
% istioctl pc listener nginx.chaos --port 6379 -o json
[
{
"name": "0.0.0.0_6379",
"address": {
"socketAddress": {
"address": "0.0.0.0",
"portValue": 6379
}
},
"filterChains": [
{
"filters": [
{
"name": "envoy.filters.network.redis_proxy",
"typedConfig": {
"@type": "type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy",
"statPrefix": "outbound|6379||redis-proxy.chaos.svc.cluster.local",
"settings": {
"opTimeout": "5s"
},
"latencyInMicros": true,
"prefixRoutes": {
"catchAllRoute": {
"cluster": "outbound|6379||redis-proxy.chaos.svc.cluster.local"
}
},
"faults": [
{
"faultEnabled": {
"defaultValue": {
"numerator": 100
}
},
"delay": "2s"
}
]
}
}
]
}
],
"deprecatedV1": {
"bindToPort": false
},
"trafficDirection": "OUTBOUND"
}
]
I'm trying a simple Ingress in GKE.
I'm following the example from https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress
The Pods are up and running and the Services are active. When I create the Ingress I get:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 48m loadbalancer-controller default/my-ingress
Warning Sync 2m32s (x25 over 48m) loadbalancer-controller Error during sync: Error running backend syncing routine: googleapi: got HTTP response code 404 with body: Not Found
I can't find the source of the problem. Any suggestions on where to look?
I have checked the cluster add-ons and permissions: httpLoadBalancing is enabled, and the node OAuth scopes are:
- https://www.googleapis.com/auth/compute
- https://www.googleapis.com/auth/devstorage.read_only
- https://www.googleapis.com/auth/logging.write
- https://www.googleapis.com/auth/monitoring
- https://www.googleapis.com/auth/servicecontrol
- https://www.googleapis.com/auth/service.management.readonly
- https://www.googleapis.com/auth/trace.append
NAME READY STATUS RESTARTS AGE
hello-kubernetes-deployment-f6cb6cf4f-kszd9 1/1 Running 0 1h
hello-kubernetes-deployment-f6cb6cf4f-lw49t 1/1 Running 0 1h
hello-kubernetes-deployment-f6cb6cf4f-qqgxs 1/1 Running 0 1h
hello-world-deployment-5cfbc486f-4c2bm 1/1 Running 0 1h
hello-world-deployment-5cfbc486f-dmcqf 1/1 Running 0 1h
hello-world-deployment-5cfbc486f-rnpcc 1/1 Running 0 1h
Name: hello-world
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"hello-world","namespace":"default"},"spec":{"ports":[{"port":6000...
Selector: department=world,greeting=hello
Type: NodePort
IP: 10.59.254.88
Port: <unset> 60000/TCP
TargetPort: 50000/TCP
NodePort: <unset> 30418/TCP
Endpoints: 10.56.2.7:50000,10.56.3.6:50000,10.56.6.4:50000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Name: hello-kubernetes
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"hello-kubernetes","namespace":"default"},"spec":{"ports":[{"port"...
Selector: department=kubernetes,greeting=hello
Type: NodePort
IP: 10.59.251.189
Port: <unset> 80/TCP
TargetPort: 8080/TCP
NodePort: <unset> 32464/TCP
Endpoints: 10.56.2.6:8080,10.56.6.3:8080,10.56.8.6:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Name: my-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (10.56.0.9:8080)
Rules:
Host Path Backends
---- ---- --------
*
/* hello-world:60000 (<none>)
/kube hello-kubernetes:80 (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"gce"},"name":"my-ingress","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"hello-world","servicePort":60000},"path":"/*"},{"backend":{"serviceName":"hello-kubernetes","servicePort":80},"path":"/kube"}]}}]}}
kubernetes.io/ingress.class: gce
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 107s loadbalancer-controller default/my-ingress
Warning Sync 66s (x15 over 107s) loadbalancer-controller Error during sync: Error running backend syncing routine: googleapi: got HTTP response code 404 with body: Not Found
Pulumi Cluster Config
{
"name": "test-cluster",
"region": "europe-west4",
"addonsConfig": {
"httpLoadBalancing": {
"disabled": false
},
"kubernetesDashboard": {
"disabled": false
}
},
"ipAllocationPolicy": {},
"pools": [
{
"name": "default-pool",
"initialNodeCount": 1,
"nodeConfig": {
"oauthScopes": [
"https://www.googleapis.com/auth/compute",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/service.management",
"https://www.googleapis.com/auth/servicecontrol",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/trace.append",
"https://www.googleapis.com/auth/cloud-platform"
],
"machineType": "n1-standard-1",
"labels": {
"pool": "api-zero"
}
},
"management": {
"autoUpgrade": false,
"autoRepair": true
},
"autoscaling": {
"minNodeCount": 1,
"maxNodeCount": 20
}
},
{
"name": "outbound",
"initialNodeCount": 2,
"nodeConfig": {
"machineType": "custom-1-1024",
"oauthScopes": [
"https://www.googleapis.com/auth/compute",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/service.management",
"https://www.googleapis.com/auth/servicecontrol",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/trace.append",
"https://www.googleapis.com/auth/cloud-platform"
],
"labels": {
"pool": "outbound"
}
},
"management": {
"autoUpgrade": false,
"autoRepair": true
}
}
The author of this post eventually figured out that the issue persists only when the cluster is bootstrapped with Pulumi.
It looks like you are missing a default backend (an L7 HTTP load balancer backend) for your default ingress controller. From what I observed, it's not deployed when you have the Istio add-on enabled in your GKE cluster (Istio has its own default ingress/egress gateways).
Please verify whether it's up and running in your cluster:
kubectl get pod -n kube-system | grep l7-default-backend
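If that Pod turns out to be missing, one thing that may be worth trying (a sketch, not verified against this Pulumi-bootstrapped cluster) is giving the Ingress its own default backend so it does not rely on the cluster-level one:
# Sketch: explicit default backend, reusing the hello-world Service from the question.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: gce
spec:
  backend:
    serviceName: hello-world
    servicePort: 60000
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: hello-world
          servicePort: 60000
      - path: /kube
        backend:
          serviceName: hello-kubernetes
          servicePort: 80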