Istio request timeout configuration for HTTPS/TLS calls

We are looking to leverage existing Istio functionality to configure request timeouts. Our microservice is in the service mesh, and it makes an HTTPS call to an external system. Is it possible to configure timeouts for HTTPS calls?
We found this Istio documentation, but its examples only cover HTTP:
Istio Request Timeouts

When you use HTTPS/TLS, the application encrypts the traffic itself, so the sidecar only sees opaque TCP and much of Istio's HTTP-level functionality is not available.
You could define a ServiceEntry and use TLS origination to let Istio perform the TLS upgrade; then you can also apply a request timeout. See this.
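A rough sketch of that approach, following the pattern from Istio's TLS origination docs (the host `api.example.com` and the 5s timeout are placeholder values): the application sends plain HTTP on port 80, the sidecar upgrades it to TLS on port 443, and because the sidecar now sees HTTP, a VirtualService timeout applies.

```yaml
# ServiceEntry declaring the external host on both ports,
# so the sidecar can intercept the plain-HTTP traffic
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-api
spec:
  hosts:
  - api.example.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS
---
# VirtualService: route port-80 traffic to port 443
# and apply the request timeout
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: external-api-timeout
spec:
  hosts:
  - api.example.com
  http:
  - match:
    - port: 80
    timeout: 5s
    route:
    - destination:
        host: api.example.com
        port:
          number: 443
---
# DestinationRule: originate TLS on port 443
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: external-api-tls
spec:
  host: api.example.com
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE
```

With this in place the application calls `http://api.example.com`, and the sidecar handles both the TLS upgrade and the timeout.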

Related

Application Load Balancer Authorization header not passed through

I currently have an API on API Gateway (REST) that has a single proxy endpoint hooked up to an HTTP proxy integration. I have a Cognito authorizer that validates incoming JWTs issued by Cognito, and if a token is valid it forwards the request along to our ECS instance via an Application Load Balancer.
The project running in that instance requires the Authorization header to be present for authorization purposes. The problem is that the header is not forwarded to the container. After much debugging, we determined that the header was going missing when the ALB was forwarding the request to the container (previously this question asked about API Gateway, because I assumed that's where things were going wrong). Other custom headers go through, but not "Authorization".
Does anyone have any experience persisting the Authorization header using ALB? I'm very new to ALB so still learning how to build these projects.
If you pass an Authorization header, Amazon API Gateway REST APIs will remap it to X-Amzn-Remapped-Authorization.
For more information, see this guide.
We actually had two rules on the ALB: one redirecting the API call from port 80 to port 443, then a forward rule to the container. We discovered that the header went missing at the redirect rule, so we eliminated it and added a listener on port 80 that forwards the call directly to the ECS task.

HTTPS routing is too slow with AWS Fargate

I am trying to serve a Django app on AWS Fargate over HTTPS.
I connected my Fargate service to a Network Load Balancer that uses a secure TCP connection with a certificate from ACM. Then I configured a Route 53 record set with the load balancer as an alias target, which made HTTPS connections possible.
HTTPS now works, but it is too slow to use this API in production. It is working much more slowly than HTTP requests made with the DNS name of the load balancer. It seems I have some problem between the load balancer and the Route 53 setting, but I don't know how to figure this out.
Generally there is no significant performance difference between HTTP and HTTPS requests. Could you post your results of HTTP vs HTTPS requests? For example, test it through JMeter while running one Fargate service over HTTP and another over HTTPS, with the same version of the app running in both places.
Once you have your results, add logs in the tasks to see how fast each request is actually processed on the server side, so you'll know for sure which one is faster and which is slower. It would be a lot easier for us to help if we had that information.

Service routing in Kubernetes using Istio based on a JWT token

I would like to use Istio in my Kubernetes cluster for routing. My use case: I have three services running in my cluster (A, B, and C), and I would like to route traffic to them based on some value in the JWT token. Is this doable using Istio?
I found the following GitHub issues, #3763 and #8444, which might be relevant to your request; based on the contributors' comments, routing network traffic on JWT claims is not expected in further Istio Mixer adapter development.
However, I assume you could configure Envoy HTTP filters to fetch the JWT from an HTTP header, use the match option of a RequirementRule, and apply a Lua script that provides the routing functionality. Another way would be to use an intermediate proxy such as NGINX Plus, which has content-based routing on JWTs out of the box within the NGINX Ingress Controller for Kubernetes.
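For what it's worth, more recent Istio versions expose validated JWT claims for routing at the ingress gateway, which may be simpler than a custom Envoy filter. A sketch, assuming an issuer `https://issuer.example.com`, a claim named `tier`, a gateway named `my-gateway`, and services `service-a`/`service-b` (all placeholder names); this only works for traffic entering through a gateway, not sidecar-to-sidecar:

```yaml
# RequestAuthentication validates the JWT at the ingress gateway
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-auth
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  jwtRules:
  - issuer: "https://issuer.example.com"
    jwksUri: "https://issuer.example.com/.well-known/jwks.json"
---
# VirtualService routing on a claim: the @request.auth.claims
# pseudo-header is populated from the validated token
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: claim-routing
spec:
  hosts:
  - "*"
  gateways:
  - my-gateway
  http:
  - match:
    - headers:
        "@request.auth.claims.tier":
          exact: premium
    route:
    - destination:
        host: service-a
  - route:
    - destination:
        host: service-b
```

Here requests whose token carries `tier: premium` go to `service-a`, and everything else falls through to `service-b`.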

Can I configure multiple certificates on my GKE/Istio Gateway?

I am using the prepackaged Istio on GKE, which comes with a pre-configured ingress gateway that takes a single SSL certificate.
Is there a way to add additional certificates to Google's standard configuration which will survive reset by their configuration tool and persist through upgrades?
The Istio docs describe how to specify multiple certificates if installing the ingress gateway yourself. I could do this if I configured a separate ingress gateway, but would like to use the default one if I could. Google's docs do not list certificates as a modifiable property.
I found a post on Medium which explains how to use multiple certificates with Istio through cert-manager and Let's Encrypt for TLS, merging the certificates together.
Could you please take a look at the post and let me know if it's useful?
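If your ingress gateway supports loading certificates from Kubernetes secrets (SDS), one way to serve multiple certificates is a single Gateway with one `server` entry per host, each referencing its own TLS secret. A sketch with placeholder host names and secret names (the secrets could be ones cert-manager maintains):

```yaml
# One HTTPS server block per host, each with its own certificate;
# credentialName points at a Kubernetes TLS secret in the
# ingress gateway's namespace
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: multi-cert-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https-site1
      protocol: HTTPS
    hosts:
    - site1.example.com
    tls:
      mode: SIMPLE
      credentialName: site1-cert   # e.g. issued by cert-manager
  - port:
      number: 443
      name: https-site2
      protocol: HTTPS
    hosts:
    - site2.example.com
    tls:
      mode: SIMPLE
      credentialName: site2-cert
```

Whether this survives the GKE add-on's configuration resets is a separate question; the Gateway resource itself is yours, but the gateway deployment it targets is managed by Google.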

HTTP to HTTPS redirect in backend behind GCLB

To my knowledge, Google Cloud Load Balancer does not support HTTP-to-HTTPS redirects out of the box, and it's a known issue: https://issuetracker.google.com/issues/35904733
Currently, I'm sending certain requests to GKE backend where I run Kubernetes apps and I have GCS-backed backends. I'm also using Apache in the default backend where I force HTTPS.
The problem with this approach is that if a request matches the criteria for the GKE backend, I have no way to force HTTPS. I'm thinking of using the Apache backend for all requests and somehow proxying some of them to the GKE backend. That way the Apache backend becomes a bottleneck, and I'm not sure it's a good solution at all.
How would you approach this problem? Thanks in advance!
It seems the only way is to send HTTP traffic to a custom backend (it can be Apache or nginx) and force the HTTPS upgrade there.
I found this answer useful if you're using a GKE backend with an Ingress:
How to force SSL for Kubernetes Ingress on GKE
To force SSL traffic from the load balancer to the GKE backend (pod), you need to expose port 443 (or similar) on the pod and configure SSL there.
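A sketch of that setup on GKE, with placeholder names (`my-app`, port 8443, secret `my-app-tls`): the `cloud.google.com/app-protocols` Service annotation tells the GCLB to speak HTTPS to the pod, and the `kubernetes.io/ingress.allow-http: "false"` Ingress annotation disables the plain-HTTP listener entirely, so only HTTPS reaches the backend.

```yaml
# Service: tell the GCLB to use HTTPS when talking to this port
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    cloud.google.com/app-protocols: '{"https-port":"HTTPS"}'
spec:
  type: NodePort
  ports:
  - name: https-port
    port: 443
    targetPort: 8443   # the container must terminate TLS here
  selector:
    app: my-app
---
# Ingress: disable the plain-HTTP listener on the load balancer
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
  - secretName: my-app-tls
  defaultBackend:
    service:
      name: my-app
      port:
        number: 443
```

Note this rejects HTTP rather than redirecting it; a redirect still requires a backend (Apache/nginx, as above) that answers on port 80 and returns 301s.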