My use case is to remove the query parameters from the path so that the Envoy filter in Istio can rate limit on the basis of the API path alone.
With the configuration below, the route is matched and rate limited, but the path passed along still includes the query parameters; it is not truncated.
The rate-limit service, for its part, does not detect any special configuration for the descriptor ("PATH", "/foo?param=value") and therefore falls back to the default for the key "PATH".
Any idea why the truncating regex is not working? Thanks.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: {{ template "name" . }}-httpfilter
namespace: istio-system
spec:
workloadSelector:
labels:
istio: ingressgateway
configPatches:
- applyTo: HTTP_FILTER
match:
context: GATEWAY
listener:
filterChain:
filter:
name: "envoy.filters.network.http_connection_manager"
subFilter:
name: "envoy.filters.http.router"
patch:
operation: INSERT_BEFORE
value:
name: envoy.filters.http.header_to_metadata
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.http.header_to_metadata.v3.Config
request_rules:
- header: ':path'
on_header_present:
              # use an arbitrary name for the namespace
# will be used later to extract descriptor value
metadata_namespace: qry-filter
              # use an arbitrary key for the metadata
# will be used later to extract descriptor value
key: uri
regex_value_rewrite:
pattern:
# regex matcher
# truncates parameters from path
regex: '^(\/[\/\d\w-]+)\??.*$'
substitution: '\\1'
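                # Note (assumption, not verified against this deployment):
                # RegexMatchAndSubstitute uses RE2 substitution syntax, where
                # capture group 1 is written as \1. In single-quoted YAML,
                # '\\1' stays the literal two characters \\1, which RE2 reads
                # as an escaped backslash followed by a literal 1 rather than
                # a group reference, so the captured path is never emitted.
                # If that is the cause, the intended value would be:
                #   substitution: '\1'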
- applyTo: HTTP_FILTER
match:
context: GATEWAY
listener:
filterChain:
filter:
name: 'envoy.filters.network.http_connection_manager'
subFilter:
name: 'envoy.filters.http.router'
patch:
operation: INSERT_BEFORE
value:
name: envoy.filters.http.ratelimit
typed_config:
          '@type': type.googleapis.com/envoy.extensions.filters.http.ratelimit.v3.RateLimit
# ensure the domain matches with the domain used in the ratelimit service config
domain: {{ template "fullname" . }}-ratelimit
failure_mode_deny: true
rate_limit_service:
grpc_service:
envoy_grpc:
              # must match load_assignment.cluster_name from the patch to the CLUSTER below
cluster_name: rate_limit_cluster
timeout: 10s
transport_api_version: V3
- applyTo: CLUSTER
match:
cluster:
# kubernetes dns of your ratelimit service
service: ratelimit.{{ .Values.openapi.destinationSuffix }}
patch:
operation: ADD
value:
name: rate_limit_cluster
type: STRICT_DNS
connect_timeout: 10s
lb_policy: ROUND_ROBIN
http2_protocol_options: {}
load_assignment:
# arbitrary name
cluster_name: rate_limit_cluster
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
# kubernetes dns of your ratelimit service
address: ratelimit.{{ .Values.openapi.destinationSuffix }}
port_value: 8081
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: {{ template "name" . }}-virtualhost
namespace: istio-system
spec:
workloadSelector:
labels:
istio: ingressgateway
configPatches:
- applyTo: VIRTUAL_HOST
match:
context: GATEWAY
routeConfiguration:
vhost:
name: ""
route:
action: ANY
patch:
operation: MERGE
value:
rate_limits:
- actions: # any actions in here
- dynamic_metadata:
descriptor_key: PATH
metadata_key:
key: qry-filter
path:
- key: uri
---
apiVersion: v1
kind: ConfigMap
metadata:
name: ratelimit-config
data:
config.yaml: |
domain: {{ template "fullname" . }}-ratelimit
descriptors:
- key: PATH
rate_limit:
unit: minute
requests_per_unit: 10
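For reference, a minimal sketch (assuming the envoyproxy/ratelimit config format used above; the domain, the /foo path, and the limits are hypothetical) of how a per-API override could look once the PATH descriptor value really is the truncated path:
domain: example-ratelimit
descriptors:
  # a specific API gets a tighter limit once PATH arrives without query parameters
  - key: PATH
    value: /foo
    rate_limit:
      unit: minute
      requests_per_unit: 5
  # fallback for all other paths
  - key: PATH
    rate_limit:
      unit: minute
      requests_per_unit: 10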
I'm using Argo CD with JumpCloud for SSO. For some reason, it periodically gets into a weird state where clicking the login button redirects back to the login page, returns a not-found error, or enters an infinite redirect loop. It's intermittent and can sometimes (but not always) be fixed by deleting cookies, clearing the cache, and/or restarting or deleting the argocd-server pods.
charts/argocd/values.yaml
environment: ""
argo-cd:
redis-ha:
enabled: true
controller:
replicas: 1
metrics:
enabled: true
serviceMonitor:
enabled: true
namespace: kube-system
additionalLabels:
release: kube-prometheus-stack
nodeSelector:
eks.amazonaws.com/capacityType: ON_DEMAND
args:
appResyncPeriod: 20
resources:
requests:
cpu: 1000m
memory: 768Mi
server:
metrics:
enabled: true
serviceMonitor:
enabled: true
namespace: kube-system
additionalLabels:
release: kube-prometheus-stack
nodeSelector:
eks.amazonaws.com/capacityType: ON_DEMAND
resources:
requests:
cpu: 50m
memory: 128Mi
rbacConfigCreate: false
config:
dex.config: |
connectors:
- type: ldap
name: jumpcloud.com
id: ad
config:
# Ldap server address
host: ldap.jumpcloud.com:636
insecureNoSSL: false
insecureSkipVerify: true
# Variable name stores ldap bindDN in argocd-secret
bindDN: "uid=dexservice,ou=Users,o=myorganizationid,dc=jumpcloud,dc=com"
# Variable name stores ldap bind password in argocd-secret
bindPW: "mysecurepassword"
usernamePrompt: Jumpcloud Username
userSearch:
baseDN: ou=Users,o=myorganizationid,dc=jumpcloud,dc=com
username: uid
idAttr: entryDN
emailAttr: mail
nameAttr: cn
groupSearch:
baseDN: ou=Users,o=myorganizationid,dc=jumpcloud,dc=com
filter: "(|(cn=K8S-Admin)(cn=K8S-ReadOnly))"
userMatchers:
- userAttr: entryDN
groupAttr: member
nameAttr: cn
repositories: |
      - url: git@github.com:myorgname/myreponame
sshPrivateKeySecret:
name: ssh-private-key-secret
key: id_ed25519
service:
type: NodePort
dex:
nodeSelector:
eks.amazonaws.com/capacityType: ON_DEMAND
redis:
nodeSelector:
eks.amazonaws.com/capacityType: ON_DEMAND
repoServer:
replicas: 2
nodeSelector:
eks.amazonaws.com/capacityType: ON_DEMAND
applicationSet:
replicaCount: 2
configs:
params:
server.insecure: true
charts/argocd/templates/argocd-rbac-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-rbac-cm
labels:
helm.sh/chart: {{ (index .Chart.Dependencies 0).Version }}
app.kubernetes.io/component: server
app.kubernetes.io/instance: argocd
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: argocd-rbac-cm
app.kubernetes.io/part-of: argocd
argocd.argoproj.io/instance: argocd
data:
policy.default: role:none
scopes: '[groups, email]'
policy.csv: |
p, role:none, *, *, */*, deny
g, K8S-Admin, role:admin
g, K8S-ReadOnly, "role:admin"
charts/argocd/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Chart.Name }}
annotations:
"kubernetes.io/ingress.class": "alb"
alb.ingress.kubernetes.io/group.name: {{ .Values.environment }}
"alb.ingress.kubernetes.io/scheme": "internal"
"alb.ingress.kubernetes.io/target-type": "instance"
"alb.ingress.kubernetes.io/listen-ports": '[{"HTTP":80},{"HTTPS":443}]'
"alb.ingress.kubernetes.io/backend-protocol": HTTP
"alb.ingress.kubernetes.io/healthcheck-path": "/"
"alb.ingress.kubernetes.io/actions.ssl-redirect": '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
"external-dns.alpha.kubernetes.io/hostname": argocd.{{ .Values.hostName }}
spec:
rules:
- host: argocd.{{ .Values.hostName }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: ssl-redirect
port:
name: use-annotation
- path: /
pathType: Prefix
backend:
service:
name: argocd-server
port:
number: 80
We're currently experiencing similar (weird) issues with our Argo CD stack, which uses Dex as an LDAP connector. The Dex logs show successful authentication, yet on returning to Argo CD it intermittently seems to switch back to an OIDC-based login method and denies access with varying error messages completely unrelated to LDAP.
Recent communication on this topic in Argo's Slack channel showed that this is not a known issue, but that it might have to do with running Argo CD in HA mode, i.e. with two or more argocd-server instances, which was exactly our setup.
I've scaled argocd-server down to a single instance for the time being. So far, the issue hasn't resurfaced.
This might not be an exact answer, but it will hopefully provide a helpful hint as to where to look for a possible solution.
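If you want to try the same mitigation declaratively, a minimal sketch against the values.yaml shown above (assuming the same argo-cd Helm chart layout) would be to pin the server to one replica:
argo-cd:
  server:
    replicas: 1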
I'm running Istio on Kubernetes (container istio/proxyv2:1.13.2) and currently use an oauth2-proxy pod to authenticate with Keycloak. I have a requirement to replace oauth2-proxy with an Istio OAuth filter, and I'm attempting to deploy the filter to the Istio ingress gateway by following this blog. When deploying the YAML below, I see the following error in the istiod logs:
2022-05-12T16:59:58.080449Z warn ads ADS:LDS: ACK ERROR istio-ingressgateway-7fd568fc99-fvvcc.istio-system-150 Internal:Error adding/updating listener(s) 0.0.0.0_8443: paths must refer to an existing path in the system: '/etc/istio/config/token-secret.yaml' does not exist
It looks to me like there is an issue loading the secret YAML files into the ingressgateway pod using SDS, but I could be wrong about this, as I don't fully understand how the secret loading is supposed to work in this example. I can't find documentation on this for the latest Istio versions, so I'm struggling. Older docs talk about an SDS container running in the ingress gateway pod, but that doesn't seem to apply to more recent Istio versions.
Can anyone help, or explain how the secrets get loaded into the ingress gateway in the example I'm following, what the issue might be, and how to diagnose it? Any help gratefully received.
The code is as follows:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: oauth2-ingress
namespace: istio-system
spec:
workloadSelector:
labels:
istio: ingressgateway
configPatches:
- applyTo: CLUSTER
match:
cluster:
service: oauth
patch:
operation: ADD
value:
name: oauth
dns_lookup_family: V4_ONLY
type: LOGICAL_DNS
connect_timeout: 10s
lb_policy: ROUND_ROBIN
transport_socket:
name: envoy.transport_sockets.tls
typed_config:
"#type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
sni: keycloak.mydomain.com
load_assignment:
cluster_name: oauth
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: keycloak.mydomain.com
port_value: 443
- applyTo: HTTP_FILTER
match:
context: GATEWAY
listener:
filterChain:
filter:
name: "envoy.filters.network.http_connection_manager"
subFilter:
name: "envoy.filters.http.jwt_authn"
patch:
operation: INSERT_BEFORE
value:
name: envoy.filters.http.oauth2
typed_config:
"#type": type.googleapis.com/envoy.extensions.filters.http.oauth2.v3.OAuth2
config:
token_endpoint:
cluster: oauth
uri: https://keycloak.mydomain.com/auth/realms/myrealm/protocol/openid-connect/token
timeout: 3s
authorization_endpoint: https://keycloak.mydomain.com/auth/realms/myrealm/protocol/openid-connect/auth
redirect_uri: "https://%REQ(:authority)%/callback"
redirect_path_matcher:
path:
exact: /callback
signout_path:
path:
exact: /signout
credentials:
client_id: myclient
token_secret:
name: token
sds_config:
path: "/etc/istio/config/token-secret.yaml"
hmac_secret:
name: hmac
sds_config:
path: "/etc/istio/config/hmac-secret.yaml"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: istio-oauth2
namespace: istio-system
data:
token-secret.yaml: |-
resources:
- "#type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret"
name: token
generic_secret:
secret:
inline_string: "myclientsecrettext"
hmac-secret.yaml: |-
resources:
- "#type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret"
name: hmac
generic_secret:
secret:
# generated using `head -c 32 /dev/urandom | base64`
inline_bytes: "XYJ7ibKwXwmRrO/yL/37ZV+T3Q/WB+xfhmVlio+wmc0="
---
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
name: jwt-authentication
namespace: istio-system
spec:
selector:
matchLabels:
app: istio-ingressgateway
jwtRules:
- issuer: "https://keycloak.mydomain.com/auth/realms/myrealm"
jwksUri: "https://keycloak.mydomain.com/auth/realms/myrealm/protocol/openid-connect/certs"
forwardOriginalToken: true
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: known-user
namespace: istio-system
spec:
selector:
matchLabels:
app: istio-ingressgateway
rules:
- when:
- key: request.headers[Authorization]
notValues:
- 'Bearer*'
- when:
- key: request.auth.audiences
values:
- 'oauth'
- key: request.auth.presenter
values:
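One avenue worth checking, offered as a hedged sketch rather than a confirmed fix: with a path-based sds_config, Envoy only reads the secret from a file that already exists inside the gateway container at that exact path; nothing mounts the ConfigMap there automatically. Assuming the stock istio-ingressgateway Deployment (whose gateway container is named istio-proxy), mounting the istio-oauth2 ConfigMap at /etc/istio/config would look roughly like this patch to the pod template:
spec:
  template:
    spec:
      containers:
        - name: istio-proxy
          volumeMounts:
            # makes token-secret.yaml and hmac-secret.yaml visible to Envoy
            - name: istio-oauth2
              mountPath: /etc/istio/config
              readOnly: true
      volumes:
        - name: istio-oauth2
          configMap:
            name: istio-oauth2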
I'm using the Doctrine ORM with Symfony, the PHP framework. I'm getting bizarre behaviour when trying to connect to Cloud SQL from GKE.
I'm able to get a connection to the DB via Doctrine on the command line; for example, php bin/console doctrine:database:create succeeds, and I can see a connection opened in the proxy pod logs.
But when I try to connect to the DB via Doctrine in my application, I run into this error without fail:
An exception occurred in driver: SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Name or service not known
I have been trying to get my head around this, but it doesn't make sense: why would I be able to connect via the command line but not from my application?
I followed the documentation here for setting up a DB connection using the Cloud SQL proxy. This is my Kubernetes deployment:
---
apiVersion: "extensions/v1beta1"
kind: "Deployment"
metadata:
name: "riptides-api"
namespace: "default"
labels:
app: "riptides-api"
microservice: "riptides"
spec:
replicas: 3
selector:
matchLabels:
app: "riptides-api"
microservice: "riptides"
template:
metadata:
labels:
app: "riptides-api"
microservice: "riptides"
spec:
containers:
- name: "api-sha256"
image: "eu.gcr.io/riptides/api#sha256:ce0ead9d1dd04d7bfc129998eca6efb58cb779f4f3e41dcc3681c9aac1156867"
env:
- name: DB_HOST
value: 127.0.0.1:3306
- name: DB_USER
valueFrom:
secretKeyRef:
name: riptides-mysql-user-skye
key: user
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: riptides-mysql-user-skye
key: password
- name: DB_NAME
value: riptides
lifecycle:
postStart:
exec:
command: ["/bin/bash", "-c", "php bin/console doctrine:migrations:migrate -n"]
volumeMounts:
- name: keys
mountPath: "/app/config/jwt"
readOnly: true
- name: cloudsql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:1.11
command: ["/cloud_sql_proxy",
"-instances=riptides:europe-west4:riptides-sql=tcp:3306",
"-credential_file=/secrets/cloudsql/credentials.json"]
# [START cloudsql_security_context]
securityContext:
runAsUser: 2 # non-root user
allowPrivilegeEscalation: false
# [END cloudsql_security_context]
volumeMounts:
- name: riptides-mysql-service-account
mountPath: /secrets/cloudsql
readOnly: true
volumes:
- name: keys
secret:
secretName: riptides-api-keys
items:
- key: private.pem
path: private.pem
- key: public.pem
path: public.pem
- name: riptides-mysql-service-account
secret:
secretName: riptides-mysql-service-account
---
apiVersion: "autoscaling/v2beta1"
kind: "HorizontalPodAutoscaler"
metadata:
name: "riptides-api-hpa"
namespace: "default"
labels:
app: "riptides-api"
microservice: "riptides"
spec:
scaleTargetRef:
kind: "Deployment"
name: "riptides-api"
apiVersion: "apps/v1beta1"
minReplicas: 1
maxReplicas: 5
metrics:
- type: "Resource"
resource:
name: "cpu"
targetAverageUtilization: 70
If anyone has any suggestions, I'd be forever grateful.
It doesn't look like anything is wrong with your Kubernetes YAML; the problem is more likely in how you are connecting using Symfony. According to the documentation here, Symfony expects the DB URI to be passed in through an environment variable called DATABASE_URL. See the following example:
# customize this line!
DATABASE_URL="postgres://db_user:db_password@127.0.0.1:5432/db_name"
This was happening because Doctrine was using default values instead of the environment variables (which should have overridden them) that I had set up in my deployment. I changed the environment variable names so they differ from the defaults, and it works now.
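If you'd rather keep Symfony's default variable name, a minimal sketch (the credentials and database name are placeholders) would be to set DATABASE_URL directly on the container instead of assembling it from separate DB_* variables:
env:
  # single URI that Doctrine reads by default, routed through the proxy sidecar
  - name: DATABASE_URL
    value: "mysql://db_user:db_password@127.0.0.1:3306/riptides"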
My YAML file:
kind: ReplicationController
apiVersion: v1
metadata:
name: locust-master
labels:
name: locust
role: master
spec:
replicas: 1
selector:
name: locust
role: master
template:
metadata:
labels:
name: locust
role: master
spec:
containers:
- name: locust
image: gcr.io/MY_PROJECT/locust-tasks:latest
env:
- name: LOCUST_MODE
key: LOCUST_MODE
value: master
- name: TARGET_HOST
key: TARGET_HOST
value: http://MY_WEBSITE.io
ports:
- name: loc-master-web
containerPort: 8089
protocol: TCP
- name: loc-master-p1
containerPort: 5557
protocol: TCP
- name: loc-master-p2
containerPort: 5558
protocol: TCP
Running kubectl create -f locust-master-controller.yaml gives:
error: error validating "locust-master-controller.yaml": error validating data: [found invalid field key for v1.EnvVar, found invalid field key for v1.EnvVar]; if you choose to ignore these errors, turn validation off with --validate=false
I am basically following the instructions word for word on:
https://github.com/GoogleCloudPlatform/distributed-load-testing-using-kubernetes
Just delete these two lines:
key: LOCUST_MODE
and
key: TARGET_HOST
There is no field called key directly under an env entry; key is only valid inside valueFrom references such as secretKeyRef or configMapKeyRef. The complete documentation for env is here.
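For comparison, a corrected env section for the locust container would use plain name/value pairs, reusing the values from the YAML above:
env:
  - name: LOCUST_MODE
    value: master
  - name: TARGET_HOST
    value: http://MY_WEBSITE.io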