According to the Istio documentation, TLSRoute does not support the rewrite action that you typically use with HTTPRoute.
Is it possible to do the same for HTTPS or TLS?
http:
- match:
  - uri:
      prefix: /ratings
  rewrite:
    uri: /v1/bookRatings
  route:
  - destination:
      host: ratings.prod.svc.cluster.local
When TLS traffic is not terminated, the traffic (including the URL parts needed for a rewrite) stays encrypted, so Istio cannot read or manipulate it. That is why these options are not available.
I have a React app (hosted on Cloudflare Pages) consuming a Flask API which I deployed on DigitalOcean App platform. I am using custom domains for both, app.example.com and api.example.com respectively.
When I try to use the app through the domain provided by Cloudflare Pages, my-app.pages.dev, I have no issues.
But when I try to use it through my custom domain app.example.com, I see that certain headers get stripped from the response to the preflight OPTIONS request. These are:
access-control-allow-credentials: true
access-control-allow-headers: content-type
access-control-allow-methods: DELETE, GET, HEAD, OPTIONS, PATCH, POST, PUT
allow: POST, OPTIONS
vary: Origin
This causes issues with CORS, as displayed in the browser console:
Access to XMLHttpRequest at 'https://api.example.com/auth/login' from origin 'https://app.example.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: The value of the 'Access-Control-Allow-Credentials' header in the response is '' which must be 'true' when the request's credentials mode is 'include'. The credentials mode of requests initiated by the XMLHttpRequest is controlled by the withCredentials attribute.
Users can log in without any issues on the Cloudflare-provided domain my-app.pages.dev, but whenever they try to log in through the custom domain, they receive this error.
Another detail: the only difference between the preflight requests in the two cases is that the browser sets the following on app.example.com:
origin: https://app.example.com
referer: https://app.example.com/login
sec-fetch-site: same-site
And the following on my-app.pages.dev
origin: https://my-app.pages.dev
referer: https://my-app.pages.dev/login
sec-fetch-site: cross-site
I am using Flask-CORS with support_credentials=True to handle CORS on the API, and axios with {withCredentials: true} to consume the API on the frontend.
Is this due to a Cloudflare policy that I'm not aware of? Does anyone have a clue?
I just solved this problem. It was due to the App Spec on DigitalOcean: I had a CORS-specific setting in the YAML file.
I changed
- cors:
    allow_headers:
    - '*'
    allow_methods:
    - GET
    - OPTIONS
    - POST
    - PUT
    - PATCH
    - DELETE
    allow_origins:
    - prefix: https://app.example.com # <== I removed this line
    - regex: https://*.example.com
    - regex: http://*.example.com
to
- cors:
    allow_headers:
    - '*'
    allow_methods:
    - GET
    - OPTIONS
    - POST
    - PUT
    - PATCH
    - DELETE
    allow_origins:
    - regex: https://*.example.com
    - regex: http://*.example.com
For reference, this is the cURL command I used to debug the problem:
curl -I -XOPTIONS https://api.example.com \
-H 'origin: https://app.example.com' \
-H 'access-control-request-method: GET'
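For a quick offline sanity check, the requirements a browser enforces for a credentialed preflight can be sketched in Python. This is a simplification (real browsers check more, e.g. allowed request headers), and the function is purely illustrative:

```python
def preflight_ok(headers: dict, origin: str, method: str) -> bool:
    """Check whether lowercase response `headers` would satisfy a
    credentialed (withCredentials) preflight from `origin` for `method`."""
    allow_origin = headers.get("access-control-allow-origin", "")
    allow_credentials = headers.get("access-control-allow-credentials", "")
    allow_methods = [m.strip() for m in
                     headers.get("access-control-allow-methods", "").split(",")]
    # With credentials, the origin must be echoed exactly ("*" is rejected)
    # and allow-credentials must be the literal string "true".
    return (allow_origin == origin
            and allow_credentials == "true"
            and method in allow_methods)

# The stripped response from the question fails the check:
print(preflight_ok({}, "https://app.example.com", "POST"))  # False

# The full header set passes:
print(preflight_ok(
    {"access-control-allow-origin": "https://app.example.com",
     "access-control-allow-credentials": "true",
     "access-control-allow-methods": "DELETE, GET, HEAD, OPTIONS, PATCH, POST, PUT"},
    "https://app.example.com", "POST"))  # True
```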
So it wasn't due to Cloudflare after all. Funnily enough, DigitalOcean App Platform traffic goes through Cloudflare by default, which added to my confusion.
I'm looking at using canary deployments in Istio, but it seems requests are randomly distributed between the new and old versions based on a weighting. This implies a business user could see one behaviour one minute and different behaviour the next, and people within the same team could experience different behaviour from each other. So if I want consistent behaviour for a user or team, it seems I need to build my own roll-out mechanism where I can control who moves to the new service version.
Am I correct or am I misunderstanding how Istio canary rollout works?
If you do a basic traffic distribution by weight, you are correct.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld
  http:
  - route:
    - destination:
        host: helloworld
        subset: v1
      weight: 90
    - destination:
        host: helloworld
        subset: v2
      weight: 10
Here, 10% of the traffic is routed to v2 at random. Any individual request might hit either version.
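The per-request nature of weight-based routing can be illustrated with a small simulation. Each request is assigned a subset independently, which is exactly why the same user can land on v1 and v2 on consecutive calls (the weights mirror the 90/10 VirtualService above; the code is only a sketch of the behaviour, not of Envoy's implementation):

```python
import random

def pick_subset(rng: random.Random) -> str:
    # Each request draws independently with the configured weights.
    return rng.choices(["v1", "v2"], weights=[90, 10])[0]

rng = random.Random(0)  # fixed seed for reproducibility
calls = [pick_subset(rng) for _ in range(10_000)]
share_v2 = calls.count("v2") / len(calls)
print(f"v2 share over 10,000 requests: {share_v2:.1%}")  # roughly 10%
```

Because consecutive draws are independent, there is no stickiness: consistency per user requires a deterministic criterion such as a header or cookie match.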
But you can do more sophisticated routing.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld
  http:
  - match:
    - headers:
        group:
          exact: testing
    route:
    - destination:
        host: helloworld
        subset: v2
  - route:
    - destination:
        host: helloworld
        subset: v1
Now there are two routes:
Users with the header group=testing will be sent to v2
All other users will be sent to v1
The header in this example could be set in the frontend based on the user, so backend requests for that user will call v2.
Or you could set a cookie for a specific group and route them to a different frontend by using something like:
- match:
  - headers:
      cookie:
        [...]
There are multiple match criteria, including headers, queryParams and authority.
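The routing decision the header-based VirtualService expresses can be mirrored in a few lines. Envoy performs this matching inside the mesh; the function below is only an illustration of the logic:

```python
def choose_subset(headers: dict) -> str:
    """Mirror of the VirtualService rules: an exact match on the
    `group` header selects v2, everything else falls through to v1."""
    if headers.get("group") == "testing":
        return "v2"
    return "v1"

print(choose_subset({"group": "testing"}))  # v2
print(choose_subset({}))                    # v1
print(choose_subset({"group": "Testing"}))  # v1 -- `exact` match is case-sensitive
```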
I'm pretty excited about HTTP Gateways due to the drastically reduced pricing in comparison to REST Gateways, but I'm stuck on creating routes that do not completely blow up my serverless.yml file.
The Serverless Framework documentation for HTTP Gateway shows this way to define routes:
functions:
  params:
    handler: handler.params
    events:
      - httpApi:
          method: GET
          path: /get/for/any/{param}
There is support for '*', but this causes issues with OPTIONS because those routes override the generated CORS policies (so OPTIONS requests would actually reach the application, which does not make sense, especially if the route is protected by an authorizer).
Also, it's not possible to define multiple methods.
# does not work
method: GET,POST,DELETE

# also not possible
method:
  - GET
  - POST
  - DELETE
The only configuration I found is to define all routes separately:
events:
  - httpApi:
      path: /someApi/{proxy+}
      method: GET
  - httpApi:
      path: /someApi/{proxy+}
      method: POST
  - httpApi:
      path: /someApi/{proxy+}
      method: DELETE
This works fine, and the console UI even groups the routes because they share the same prefixed path.
But with this, I have to define every HTTP method for each of my main resources separately, including the attached authorizer.
Is there some way to combine this?
I successfully implemented an ingress with TLS in my AKS cluster (https://learn.microsoft.com/en-us/azure/aks/ingress-own-tls), but I would like to pass the information contained in the client certificate to the backend.
I tried adding the annotation nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true" to my ingress, but the information seems to be missing from my request headers (I am simply printing the content of request.headers from my Flask application). Other headers are shown correctly, e.g. X-Forwarded-Proto: https or X-Forwarded-Port: 443.
Could somebody confirm the expected behaviour of the annotation?
Do I need to configure the backend somehow with tls as well?
EDIT
I accessed the ingress pod, and in the nginx config I could not find any reference to ssl_client_s_dn, which I would expect to be the best candidate for passing the certificate info into a header.
I tried to assign some custom headers following the steps in https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/customization/custom-headers, but this also does not seem to work.
Which version of nginx-ingress are you using?
At least with version 0.30, I'm able to see the client's certificate details passed to the backend properly.
The value of ssl_client_s_dn is passed as the Ssl-Client-Subject-Dn header with the default nginx controller setup; no customization is needed.
Here is the relevant content of my default /etc/nginx/nginx.conf (rendered from the ConfigMap):
# Pass the extracted client certificate to the backend
proxy_set_header ssl-client-cert $ssl_client_escaped_cert;
proxy_set_header ssl-client-verify $ssl_client_verify;
proxy_set_header ssl-client-subject-dn $ssl_client_s_dn;
proxy_set_header ssl-client-issuer-dn $ssl_client_i_dn;
Request headers seen from the backend's perspective:
...
    "Ssl-Client-Issuer-Dn": "CN=example.com,O=example Inc.",
    "Ssl-Client-Subject-Dn": "O=client organization,CN=client.example.com",
    "Ssl-Client-Verify": "SUCCESS",
    "User-Agent": "curl/7.58.0",
    "X-Forwarded-Host": "httpbin.example.com",
    "X-Scheme": "https",
  }
}
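The Ssl-Client-Subject-Dn value arrives as a single comma-separated string. Splitting it into fields on the backend can be done with a small stdlib-only helper; this is a naive sketch that breaks on DNs containing escaped commas, so treat it as illustrative only:

```python
def parse_dn(dn: str) -> dict:
    """Split a distinguished-name string like
    'O=client organization,CN=client.example.com' into a dict.
    Naive: does not handle escaped commas inside attribute values."""
    parts = (item.split("=", 1) for item in dn.split(","))
    return {key.strip(): value for key, value in parts}

dn = "O=client organization,CN=client.example.com"
print(parse_dn(dn))  # {'O': 'client organization', 'CN': 'client.example.com'}
```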
You can always add your own custom headers, as explained here
Example:
apiVersion: v1
data:
  X-Client-Cert-Info: $ssl_client_s_dn
kind: ConfigMap
metadata:
  ...
which shows up at the backend as:
...
    "X-Client-Cert-Info": "O=client organization,CN=client.example.com",
    "X-Forwarded-Host": "httpbin.example.com",
    "X-Scheme": "https",
  }
}
You can also pass annotations on the nginx ingress controller's Service, like this:
annotations:
  service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60'
  service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
If you want to modify headers based on an ingress rule, you can also add annotations to the individual ingress rules.
Here is the URL:
https://landfill.bugzilla.org/bugzilla-tip/
In my code I have this:
Server server = new Server(host, port, path);
From the URL, what is host, what is port and what is path? What are the input values of the method?
Host: landfill.bugzilla.org
Port: 443 (default)
Path: /bugzilla-tip/
https://www.rfc-editor.org/rfc/rfc1738
Unfortunately, the other answers to this question can be slightly misleading. Referring to landfill.bugzilla.org as the host is correct in this specific example, but if the port were anything other than 443 it would be incorrect.
https:// uses port 443 by default, so you may omit it from the URL; with the port written out it would look like https://landfill.bugzilla.org:443/bugzilla-tip:
Protocol: https://
Hostname: landfill.bugzilla.org
Port: 443
Host: landfill.bugzilla.org or landfill.bugzilla.org:443 (it depends; see below)
Hostport: landfill.bugzilla.org:443
Path: bugzilla-tip
host and hostname are not the same in all instances. For example, in JavaScript, location.host returns www.geeksforgeeks.org:8080 while location.hostname returns www.geeksforgeeks.org. So the two only match when the protocol's default port is in use.
More info: https://www.rfc-editor.org/rfc/rfc1738
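The breakdown above can be reproduced with Python's standard library, which also makes the host/hostname/port distinction concrete:

```python
from urllib.parse import urlsplit

# Port omitted: the scheme implies 443, but .port reports only what the
# URL actually contains.
u = urlsplit("https://landfill.bugzilla.org/bugzilla-tip/")
print(u.scheme)    # https
print(u.hostname)  # landfill.bugzilla.org
print(u.port)      # None (not written in the URL; 443 is the scheme default)
print(u.path)      # /bugzilla-tip/

# Port written out: netloc (the "host" in the host:port sense) includes it.
u2 = urlsplit("https://landfill.bugzilla.org:443/bugzilla-tip/")
print(u2.netloc)   # landfill.bugzilla.org:443
print(u2.port)     # 443
```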
Have a look at this:
http://bl.ocks.org/abernier/3070589
Host: landfill.bugzilla.org
Port: 443 (HTTPS)
Path: /bugzilla-tip
For more details, please read this.
In your case, the Host the code is referring to is landfill.bugzilla.org.*
Port: by default the HTTPS port is 443, but you should verify this.
Path: /bugzilla-tip
*Although this is theoretically not quite correct, just put it that way for simplicity.
landfill.bugzilla.org is the fully qualified domain name that DNS resolves to find the server; landfill itself is just the subdomain label within the bugzilla.org domain.