There's a need to apply a request body size limit to certain domain names via Traefik.
The Traefik middleware is:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: request-limits
spec:
  buffering:
    maxRequestBodyBytes: 10485760
    retryExpression: "IsNetworkError() && Attempts() < 2"
And this can be applied globally via:
additionalArguments:
  - --entrypoints.websecure.http.middlewares=traefik-request-limits@kubernetescrd
How can this middleware be applied to certain domain names?
What I've tried: remove the above additionalArguments and replace them with:
http:
  routers:
    http-specific:
      rule: "HostRegexp(`{name:(.*-)(service|mock|proxy)\\.(.*)\\.(example\\.com}`)"
      entrypoints:
        - websecure
      middlewares:
        - request-limits
      service:
        - noop@internal
However, the above route is not getting created.
Any tips or pointers would be much appreciated.
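For reference, here is a file-provider router sketch with the HostRegexp braces balanced and the middleware referenced through the kubernetescrd provider, which is how I would expect this to be scoped to specific host names (the router name and the exact pattern are assumptions, so treat this as a sketch rather than a verified fix):
http:
  routers:
    http-specific:
      # Balanced pattern: hosts like foo-service.<env>.example.com
      rule: "HostRegexp(`{name:.*-(service|mock|proxy)\\..*\\.example\\.com}`)"
      entryPoints:
        - websecure
      middlewares:
        # cross-provider reference: <namespace>-<name>@kubernetescrd
        - traefik-request-limits@kubernetescrd
      # noop@internal is Traefik's built-in no-op service
      service: noop@internal
Alternatively, an IngressRoute with a Host/HostRegexp match and a middlewares entry per route should achieve the same per-domain scoping without the file provider.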
Traefik Feature Status
https://github.com/traefik/traefik/issues/5098
Related
I'm struggling with an Ingress configuration in YAML because the pattern matching doesn't seem to work.
I would like the frontend-lb ClusterIP Service for the frontend deployment to respond to any of these:
https://example.com
https://example.com/home
https://example.com/login
... any other without /api/
And the backend-lb ClusterIP Service for the backend deployment to respond to any of these:
https://example.com/api/...
The YAML for the Ingress rules is the following:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - secretName: example-com-tls
    hosts:
    - example.com
  rules:
  - host: example.com
    http:
      paths:
      - path: /api
        backend:
          serviceName: gateway-lb
          servicePort: 80
      - path: /
        backend:
          serviceName: frontend-lb
          servicePort: 80
The result is that every backend URL is treated as a frontend URL and returns 404 Not Found.
I've tried many other regexes, and I've also tried to exclude /api from the frontend path with (?!api).*, but with no success.
UPDATE:
From the logs, it looks like the URL path gets blanked for the backend, because if I call:
https://example.com/api/javalin-api-gateway/login
I get this error:
Not found. Request is below context-path (context-path:
'/javalin-api-gateway')
whereas when I call the frontend with a specific URL path:
https://example.com/home
the /home controller is actually called (the path doesn't get blanked).
If I call the backend service directly (exposing the service as a LoadBalancer) with the same URL:
http://192.168.64.17:31186/javalin-api-gateway/login
I get the right response, a sign that the backend is working properly.
How is it possible that only the backend service doesn't receive the complete path?
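For reference, one way this kind of split is commonly handled (a sketch, not part of the original question; it assumes ingress-nginx 0.22.0+ and omits the TLS section for brevity) is to give only the /api paths a capture-group rewrite, in a separate Ingress, so the frontend path is passed through untouched:
# Backend: strip the /api prefix but keep the rest of the path
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /api(/|$)(.*)
        backend:
          serviceName: gateway-lb
          servicePort: 80
---
# Frontend: no rewrite annotation, so the path reaches the pod unchanged
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend-lb
          servicePort: 80
With that split, /api/javalin-api-gateway/login would reach the gateway as /javalin-api-gateway/login (matching what works when the service is called directly), while /home would still reach the frontend as /home.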
Problem
We run Istio on our Kubernetes cluster and we're implementing AuthorizationPolicies.
We want to apply a filter on email address, an HTTP condition that is only applicable to HTTP services.
Our Kiali service should be an HTTP service (it has an HTTP port, an HTTP listener, and even has HTTP conditions applied to its filters), and yet the AuthorizationPolicy does not work.
What gives?
Our setup
We have a management namespace with an ingressgateway (port 443), and a gateway+virtual service for Kiali.
These latter two point to the Kiali service in the kiali namespace.
Both the management and kiali namespace have a deny-all policy and an allow policy to make an exception for particular users.
(See AuthorizationPolicy YAMLs below.)
Authorization on the management ingress gateway works.
The ingress gateway has 3 listeners, all HTTP, and HTTP conditions are created and applied as you would expect.
You can visit its backend services other than Kiali if you're on the email list, and you cannot do so if you're not on the email list.
Authorization on the Kiali service does not work.
It has 99 listeners (!), including an HTTP listener on its configured 20001 port and its IP, but it does not work.
You cannot visit the Kiali service (due to the default deny-all policy).
The Kiali service has port 20001 enabled and named 'http-kiali', so the VirtualService should be fine with that. (See the YAMLs for the service and virtual service below.)
EDIT: it was suggested that the syntax of the email values matters.
I think that has been taken care of:
in the management namespace, the YAML below works as expected
in the kiali namespace, the same YAML fails to work as expected.
the empty brackets in the 'property(map[request.auth.claims[email]:{[brackets@test.com] []}])' message are, I think, the Values (present) and NotValues (absent), respectively, as per 'constructed internal model: &{Permissions:[{Properties:[map[request.auth.claims[email]:{Values:[brackets@test.com] NotValues:[]}]]}]}'
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: testpolicy-brackets
  namespace: kiali
spec:
  action: ALLOW
  rules:
  - when:
    - key: source.namespace
      values: ["brackets"]
    - key: request.auth.claims[email]
      values: ["brackets@test.com"]
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: testpolicy-yamllist
  namespace: kiali
spec:
  action: ALLOW
  rules:
  - when:
    - key: source.namespace
      values:
      - list
    - key: request.auth.claims[email]
      values:
      - list@test.com
debug rbac found authorization allow policies for workload [app=kiali,pod-template-hash=5c97c4bb66,security.istio.io/tlsMode=istio,service.istio.io/canonical-name=kiali,service.istio.io/canonical-revision=v1.16.0,version=v1.16.0] in kiali
debug rbac constructed internal model: &{Permissions:[{Services:[] Hosts:[] NotHosts:[] Paths:[] NotPaths:[] Methods:[] NotMethods:[] Ports:[] NotPorts:[] Constraints:[] AllowAll:true v1beta1:true}] Principals:[{Users:[] Names:[] NotNames:[] Group: Groups:[] NotGroups:[] Namespaces:[] NotNamespaces:[] IPs:[] NotIPs:[] RequestPrincipals:[] NotRequestPrincipals:[] Properties:[map[source.namespace:{Values:[brackets] NotValues:[]}] map[request.auth.claims[email]:{Values:[brackets@test.com] NotValues:[]}]] AllowAll:false v1beta1:true}]}
debug rbac generated policy ns[kiali]-policy[testpolicy-brackets]-rule[0]: permissions:<and_rules:<rules:<any:true > > > principals:<and_ids:<ids:<or_ids:<ids:<metadata:<filter:"istio_authn" path:<key:"source.principal" > value:<string_match:<safe_regex:<google_re2:<> regex:".*/ns/brackets/.*" > > > > > > > ids:<or_ids:<ids:<metadata:<filter:"istio_authn" path:<key:"request.auth.claims" > path:<key:"email" > value:<list_match:<one_of:<string_match:<exact:"brackets@test.com" > > > > > > > > > >
debug rbac ignored HTTP principal for TCP service: property(map[request.auth.claims[email]:{[brackets@test.com] []}])
debug rbac role skipped for no principals found
debug rbac found authorization allow policies for workload [app=kiali,pod-template-hash=5c97c4bb66,security.istio.io/tlsMode=istio,service.istio.io/canonical-name=kiali,service.istio.io/canonical-revision=v1.16.0,version=v1.16.0] in kiali
debug rbac constructed internal model: &{Permissions:[{Services:[] Hosts:[] NotHosts:[] Paths:[] NotPaths:[] Methods:[] NotMethods:[] Ports:[] NotPorts:[] Constraints:[] AllowAll:true v1beta1:true}] Principals:[{Users:[] Names:[] NotNames:[] Group: Groups:[] NotGroups:[] Namespaces:[] NotNamespaces:[] IPs:[] NotIPs:[] RequestPrincipals:[] NotRequestPrincipals:[] Properties:[map[source.namespace:{Values:[list] NotValues:[]}] map[request.auth.claims[email]:{Values:[list@test.com] NotValues:[]}]] AllowAll:false v1beta1:true}]}
debug rbac generated policy ns[kiali]-policy[testpolicy-yamllist]-rule[0]: permissions:<and_rules:<rules:<any:true > > > principals:<and_ids:<ids:<or_ids:<ids:<metadata:<filter:"istio_authn" path:<key:"source.principal" > value:<string_match:<safe_regex:<google_re2:<> regex:".*/ns/list/.*" > > > > > > > ids:<or_ids:<ids:<metadata:<filter:"istio_authn" path:<key:"request.auth.claims" > path:<key:"email" > value:<list_match:<one_of:<string_match:<exact:"list@test.com" > > > > > > > > > >
debug rbac ignored HTTP principal for TCP service: property(map[request.auth.claims[email]:{[list@test.com] []}])
debug rbac role skipped for no principals found
(The YAMLs mentioned above follow.)
# Cluster AuthorizationPolicies
## Management namespace
Name: default-deny-all-policy
Namespace: management
API Version: security.istio.io/v1beta1
Kind: AuthorizationPolicy
Spec:
---
Name: allow-specified-email-addresses
Namespace: management
API Version: security.istio.io/v1beta1
Kind: AuthorizationPolicy
Spec:
  Action:  ALLOW
  Rules:
    When:
      Key:  request.auth.claims[email]
      Values:
        my.email@my.provider.com
---
## Kiali namespace
Name: default-deny-all-policy
Namespace: kiali
API Version: security.istio.io/v1beta1
Kind: AuthorizationPolicy
Spec:
---
Name: allow-specified-email-addresses
Namespace: kiali
API Version: security.istio.io/v1beta1
Kind: AuthorizationPolicy
Spec:
  Action:  ALLOW
  Rules:
    When:
      Key:  request.auth.claims[email]
      Values:
        my.email@my.provider.com
---
# Kiali service YAML
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kiali
    version: v1.16.0
  name: kiali
  namespace: kiali
spec:
  clusterIP: 10.233.18.102
  ports:
  - name: http-kiali
    port: 20001
    protocol: TCP
    targetPort: 20001
  selector:
    app: kiali
    version: v1.16.0
  sessionAffinity: None
  type: ClusterIP
---
# Kiali VirtualService YAML
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: kiali-virtualservice
  namespace: management
spec:
  gateways:
  - kiali-gateway
  hosts:
  - our_external_kiali_url
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: kiali.kiali.svc.cluster.local
        port:
          number: 20001
Marking as solved: I had forgotten to apply a RequestAuthentication to the Kiali namespace.
The problematic situation, with the fix noted in the last item:
RequestAuthentication on the management namespace adds a user JWT (through an EnvoyFilter that forwards requests to an authentication service)
AuthorizationPolicy on the management namespace checks the request.auth.claims[email]. These fields exist in the JWT and all is well.
AuthorizationPolicy on the Kiali namespace also checks the request.auth.claims[email] field, but since there was no RequestAuthentication in that namespace, there was no JWT with these fields. (Some fields were populated, e.g. source.namespace, but nothing from a JWT.) Hence, user validation on that field failed, as you would expect.
RequestAuthentication on the Kiali namespace was missing; this was the fix. Adding a RequestAuthentication for the Kiali namespace populates the user information, which allows the AuthorizationPolicy to perform its checks on fields that actually exist.
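For completeness, a minimal RequestAuthentication sketch for the kiali namespace (the issuer and jwksUri values here are placeholders, not the actual values from this setup):
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: kiali-request-authentication
  namespace: kiali
spec:
  jwtRules:
  - issuer: "https://your-issuer.example.com"        # placeholder
    jwksUri: "https://your-issuer.example.com/jwks"  # placeholder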
According to the Istio documentation:
Unsupported keys and values are silently ignored.
In your debug log there is:
debug rbac ignored HTTP principal for TCP service: property(map[request.auth.claims[email]:{[my.email@my.provider.com] []}])
As you can see, there are []}] characters there, which might suggest that the value got parsed the wrong way and was ignored as an unsupported value.
Try putting your values inside ["..."], as suggested in the documentation:
request.auth.claims
Claims from the origin JWT. The actual claim name is surrounded by brackets. (HTTP only.)
key: request.auth.claims[iss]
values: ["*@foo.com"]
Hope it helps.
I created the API using GKE and Cloud Endpoints gRPC, and everything is fine, but when I try to access my API from the Endpoints Portal it does not work.
In the Endpoints Portal for the API, enter any ID in ayah_id and try to execute the request; you will see this error:
ENOTFOUND: Error resolving domain "https://quran.endpoints.utopian-button-227405.cloud.goog"
I don't know why this is not working, even though my API is running successfully at http://34.71.56.199/v1/image/ayah/ayah-1. I'm using HTTP transcoding; the actual gRPC service is running on 34.71.56.199:81.
I think I missed some configuration steps. Can someone please let me know what I missed?
Update
api_config.yaml
# The configuration schema is defined by service.proto file
# https://github.com/googleapis/googleapis/blob/master/google/api/service.proto
type: google.api.Service
config_version: 3
name: quran.endpoints.utopian-button-227405.cloud.goog
usage:
  rules:
  # Allow unregistered calls for all methods.
  - selector: "*"
    allow_unregistered_calls: true
#
# API title to appear in the user interface (Google Cloud Console).
#
title: Quran gRPC API
apis:
- name: quran.Audio
- name: quran.Ayah
- name: quran.Edition
- name: quran.Image
- name: quran.Surah
- name: quran.Translation
Update 2
api_config.yaml
# The configuration schema is defined by service.proto file
# https://github.com/googleapis/googleapis/blob/master/google/api/service.proto
type: google.api.Service
config_version: 3
name: quran.endpoints.utopian-button-227405.cloud.goog
endpoints:
- name: quran.endpoints.utopian-button-227405.cloud.goog
  target: "34.71.56.199"
usage:
  rules:
  # Allow unregistered calls for all methods.
  - selector: "*"
    allow_unregistered_calls: true
#
# API title to appear in the user interface (Google Cloud Console).
#
title: Quran gRPC API
apis:
- name: quran.Audio
- name: quran.Ayah
- name: quran.Edition
- name: quran.Image
- name: quran.Surah
- name: quran.Translation
api_config_http.yaml
# The configuration schema is defined by service.proto file
# https://github.com/googleapis/googleapis/blob/master/google/api/service.proto
type: google.api.Service
config_version: 3
name: quran.endpoints.utopian-button-227405.cloud.goog
#
# Http Transcoding.
#
# HTTP rules define translation from HTTP/REST/JSON to gRPC. With these rules
# HTTP/REST/JSON clients will be able to call the Quran service.
#
http:
  rules:
  #
  # Image Service transcoding
  #
  - selector: quran.Image.CreateImage
    post: '/v1/image'
    body: '*'
  - selector: quran.Image.FindImageByAyahId
    get: '/v1/image/ayah/{id}'
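For context, here is a sketch of how these configs are typically wired up on GKE with the Extensible Service Proxy (ESP) sidecar; the container names, image names and port numbers below are assumptions, not taken from the question:
# Sketch of the Deployment's containers section (assumed names/ports)
containers:
- name: esp
  image: gcr.io/endpoints-release/endpoints-runtime:1
  args:
  - --http_port=8080                                            # serves the transcoded REST/JSON API
  - --backend=grpc://127.0.0.1:81                               # assumed in-pod gRPC port
  - --service=quran.endpoints.utopian-button-227405.cloud.goog
  - --rollout_strategy=managed
  ports:
  - containerPort: 8080
- name: quran-grpc
  image: gcr.io/utopian-button-227405/quran-grpc                # hypothetical image
  ports:
  - containerPort: 81
Also, as far as I can tell, after adding the endpoints/target section in Update 2 the service configuration has to be redeployed with gcloud endpoints services deploy (together with the compiled proto descriptor) before the quran.endpoints.utopian-button-227405.cloud.goog name will resolve for the Portal.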
I am trying to host a Django website on Azure Kubernetes Service behind nginx-ingress, and I would like my Django web app to be served under a path.
For example, when accessing the default admin site, I would like to access it at http://example.com/django/admin instead of http://example.com/admin.
I tried the configuration below. When I access http://example.com/django/admin it forwards me to http://example.com/admin and shows me a 404 error from the default ingress backend; since I set Django's debug setting to true, I assume this means the ingress did not send my request to my Django service.
# path example
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: django-ingress
  labels:
    app: django
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: django-service
          servicePort: 80
        path: /django(/|$)(.*)
So I tried curl -I -k http://example.com/django/admin, and it shows something like the following:
HTTP/1.1 301 Moved Permanently
Server: openresty/1.15.8.2
Date: Wed, 06 Nov 2019 04:14:14 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 0
Connection: keep-alive
Location: /admin/
The same thing happens for any valid page in the site; if I curl -I -k http://example.com/django/any_valid_page it shows:
HTTP/1.1 301 Moved Permanently
Server: openresty/1.15.8.2
Date: Wed, 06 Nov 2019 04:14:14 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 0
Connection: keep-alive
Location: /any_valid_page/
I wonder whether this is caused by my doing the test with the default Django development web server (i.e. python manage.py runserver)?
If I try to host it at the root as below, everything is fine...
# root example
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: django-ingress
  labels:
    app: django
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: django-service
          servicePort: 80
        path: /
Try adding this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: django-ingress
  labels:
    app: django
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/rewrite-target: /django
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: django-service
          servicePort: 80
        path: /django
Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a capture group.
So make sure you have the right version.
When using SSL offloading outside of the cluster, it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: "true" annotation in the particular resource.
I think your Ingress configuration file should look like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: django-ingress
  labels:
    app: django
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /django(/|$)(.*)
        backend:
          serviceName: django-service
          servicePort: 80
If you get a 404 error, here is a possible solution:
Change https to http in the curl command:
curl --resolve your-host:80:xx.xxx.xx.xxx http://my-host:80
To get the IP from the kubectl get ing command, it is necessary to enable the reporting Ingress status feature. Take a look at reporting-ingress-status.
There is a default server in the Ingress controller. It returns the Not Found page with the 404 status code for all requests for domains for which there are no Ingress rules defined. Those requests are not shown in the access log.
Since you're getting a 404, this means that the host header of your requests doesn't match the host field in the Ingress resource. To set the host header in curl, see the previous curl commands. Optionally, you can also do:
curl http://<ip> -H "host: example.com"
Please take a look at nginx-ingress, server-side-https-enforcement-nginx.
This is a problem on Django's side. Whenever the admin is not logged in, /django/admin results in a redirect to /admin/. In this case, if you just replace /admin/ with /django/admin/ in the browser URL field, it will work and open the Django admin login.
So basically Django's built-in redirect conflicts with the Ingress's rewrite module.
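One way to reconcile the two, as an assumption on my part rather than something from the answers above, is to keep the capture-group rewrite, pass the stripped prefix to the backend, and configure Django to generate URLs with the prefix again (for example with FORCE_SCRIPT_NAME = '/django' in settings). On the ingress side that could look like:
# Sketch: extra annotation on the path-based Ingress above
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    # tells the backend which prefix was stripped by the rewrite
    nginx.ingress.kubernetes.io/x-forwarded-prefix: "/django"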
How can I use a part of the matched URI in the route destination in Istio?
I am trying to achieve something like this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
spec:
  http:
  - match:
    - uri:
        regex: "^/foo/(.+)/?$"
    route:
    - destination:
        host: bar-$1
        port:
          number: 80
As far as I know, this is not possible at all.
I can't provide you with a confirmation link for this.
A similar question was asked here in the past:
can istio support route to different service by dynamic part of uri path
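A common workaround, which is my own suggestion rather than something from the linked thread, is to enumerate the known values of the dynamic segment as explicit prefix matches, one route per backend:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-routes          # hypothetical name
spec:
  hosts:
  - foo.example.com         # hypothetical host
  http:
  - match:
    - uri:
        prefix: /foo/alpha/
    route:
    - destination:
        host: bar-alpha     # hypothetical per-segment services
        port:
          number: 80
  - match:
    - uri:
        prefix: /foo/beta/
    route:
    - destination:
        host: bar-beta
        port:
          number: 80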