Enable HSTS when using Istio in AWS EKS - amazon-web-services

I have installed istio-gateway using Helm charts in AWS EKS. I am able to reach the application through the AWS ALB and the gateway, and I have configured a VirtualService for traffic routing.
For security purposes I tried enabling HSTS, but it is not taking effect and I cannot see the HSTS header in the browser either.
Below is the VirtualService config I have used:
http:
- match:
  - uri:
      prefix: /
  route:
  - destination:
      host: serviceA
      port:
        number: 80
  headers:
    response:
      set:
        Strict-Transport-Security: "max-age=31536000; includeSubDomains"

Related

nginx ingress redirect to https based on header

I have an EKS setup where traffic is sent the following way:
Users -> CloudFront -> ALB -> EKS. EKS has an NGINX ingress controller.
Currently "force-ssl-redirect" is enabled, so the NGINX ingress controller redirects all HTTP traffic to HTTPS.
I want CloudFront to connect to the ALB using HTTP. Hence, I am looking to have a conditional HTTPS redirect in the NGINX controller:
I will set a custom header on requests coming in from CloudFront.
If this header is found, I want the NGINX controller to return the correct response over HTTP. If not, I want it to redirect to HTTPS.
How can this be done?
I assume that you would like to add a custom header in CloudFront, and then do a conditional redirect on the NGINX side.
You can add custom headers to the request via CloudFront Functions - https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cloudfront-functions.html
Then I think you need to use the nginx.ingress.kubernetes.io/configuration-snippet annotation on the Ingress Kubernetes resource to add custom configuration to the NGINX location. Maybe something like this can work:
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: redirect
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($http_x_custom_header) {
        return 301 http://$host$request_uri;
      }
      return 301 https://$host$request_uri;
spec:
  rules:
  - host: ...
You can use an NGINX ConfigMap for conditional HTTPS redirection.
You need to create an NGINX configuration snippet that checks for the custom header set by CloudFront:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  ssl-redirect-snippet: |
    if ($http_cf_custom_header) {
      return 301 http://$server_name$request_uri;
    }
    return 301 https://$server_name$request_uri;
If the header is present, NGINX returns a 301 redirect to HTTP; if it is not present, it redirects to the same URL using HTTPS.
Now you need to add the ConfigMap snippet to your NGINX Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      include /etc/nginx/ssl-redirect-snippet;
spec:
  rules:
  - host: my.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              name: http
You need to configure CloudFront to include the custom header (seen by NGINX as $http_cf_custom_header) when forwarding requests to the ALB; check the official AWS documentation on origin custom headers for further information. A rough sketch of that configuration is shown below.
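For illustration only, here is a hypothetical CloudFormation sketch of how the origin custom header could be attached to the ALB origin; the distribution layout, ALB domain name, header name and value are placeholders and are not part of the original answer:
Resources:
  Distribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        Origins:
        - Id: alb-origin
          DomainName: my-alb-1234567890.eu-west-1.elb.amazonaws.com  # placeholder ALB DNS name
          CustomOriginConfig:
            OriginProtocolPolicy: http-only          # CloudFront talks to the ALB over HTTP
          OriginCustomHeaders:
          - HeaderName: cf-custom-header             # arrives in NGINX as $http_cf_custom_header
            HeaderValue: "1"                         # placeholder value
        DefaultCacheBehavior:
          TargetOriginId: alb-origin
          ViewerProtocolPolicy: redirect-to-https    # clients always use HTTPS towards CloudFront
          ForwardedValues:
            QueryString: true
CloudFront adds origin custom headers to every request it forwards to that origin, so the NGINX snippet can rely on the header only being present on traffic that actually passed through CloudFront.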

How to remove or modify header from istio ingress gateway

The Chrome browser redirects all my domain and subdomain requests to HTTPS; this is unwanted behavior in my case.
According to https://www.chromium.org/hsts, this is the HSTS policy that Chrome applies to the domain and all of its subdomains.
I am using Istio version 1.7.4 and noticed that the Istio ingress gateway adds the strict-transport-security header, which causes this issue:
strict-transport-security: max-age=15552000; includeSubDomains
How can I remove this header from the ingress gateway?
You can use a VirtualService to add or remove certain headers.
This example from the official Istio documentation shows how you can remove it:
Headers
Message headers can be manipulated when Envoy forwards requests to, or responses from, a destination service. Header manipulation rules can be specified for a specific route destination or for all destinations. The following VirtualService adds a test header with the value true to requests that are routed to any reviews service destination. It also removes the foo response header, but only from responses coming from the v1 subset (version) of the reviews service.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews.prod.svc.cluster.local
  http:
  - headers:
      request:
        set:
          test: "true"
    route:
    - destination:
        host: reviews.prod.svc.cluster.local
        subset: v2
      weight: 25
    - destination:
        host: reviews.prod.svc.cluster.local
        subset: v1
      headers:
        response:
          remove:
          - foo # <-- HERE!
      weight: 75
Istio.io: Latest: Docs: Reference: Config: Networking: Virtual service: Headers
Additional resources:
Istio.io: Latest: Docs

Istio - Terminate TLS at AWS NLB

I'm using EKS and the latest Istio installed via Helm. I'm trying to implement TLS based on a wildcard cert we have for our domain in AWS Certificate Manager. I'm running into a problem where the connection between the client and the NLB works, with TLS being terminated there, but the NLB can't talk to the Istio LB over the secure port. In the AWS console I can rewrite the forwarding rules to forward traffic from port 443 to the standard Istio HTTP target, but I can't find a way to do this via code. I'm trying to avoid all click-ops. Here are my Helm overrides for the gateway:
gateways:
  istio-ingressgateway:
    serviceAnnotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:XXXXXXXXXXXXXXXXXX:certificate/XXXXXXXXXXXXXXXXXXXX"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
So what I'm expecting to occur here is:
Client:443 --> NLB:443 --> Istio Gateway:80
but what I end up with is
Client:443 --> NLB:443 --> Istio Gateway:443
Does anyone have any thoughts on how to get this to work via code? Alternatively, if someone can point me to a resource for getting TLS communication between the NLB and Istio working, I'm good with that too.
Probably what is happening is that if you terminate TLS on the load balancer it won't carry SNI to the target group. I had the exact same issue and ended up solving it by setting the host to '*' on the ingress Gateway and then specifying the hosts on the different VirtualServices (as recommended here and also in Istio's official docs); a rough sketch of that layout follows.
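As an illustration of that setup only (the names, namespace, selector and hostname below are placeholders, not taken from the answer), the Gateway accepts any host while each VirtualService pins down the real hostname:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: ingressgateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway              # placeholder selector for the ingress gateway pods
  servers:
  - hosts:
    - '*'                              # wildcard host at the Gateway level
    port:
      name: http
      number: 80
      protocol: HTTP
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - app.example.com                    # the concrete hostname is matched here instead
  gateways:
  - istio-system/ingressgateway
  http:
  - route:
    - destination:
        host: my-app.default.svc.cluster.local
        port:
          number: 80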
Your service annotations are already correct; what is missing is changing the Istio Gateway's port 443 protocol to HTTP:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: http-gateway-external
  namespace: istio-ingress
spec:
  selector:
    istio: gateway-external
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP
  - hosts:
    - '*'
    port:
      name: https
      number: 443
      protocol: HTTP # Change from HTTPS to HTTP

What's the purpose of the `VirtualService` in this example?

I am looking at this example of Istio, and they are creating a ServiceEntry and a VirtualService to access the external service, but I don't understand why they are creating a VirtualService as well.
So, this is the ServiceEntry:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: edition-cnn-com
spec:
  hosts:
  - edition.cnn.com
  ports:
  - number: 80
    name: http-port
    protocol: HTTP
  - number: 443
    name: https-port
    protocol: HTTPS
  resolution: DNS
With just this object, if I try to curl edition.cnn.com, I get 200:
/ # curl edition.cnn.com -IL 2>/dev/null | grep HTTP
HTTP/1.1 301 Moved Permanently
HTTP/1.1 200 OK
While I can't access other services:
/ # curl google.com -IL
HTTP/1.1 502 Bad Gateway
location: http://google.com/
date: Fri, 10 Jan 2020 10:12:45 GMT
server: envoy
transfer-encoding: chunked
But in the example they create this VirtualService as well.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: edition-cnn-com
spec:
  hosts:
  - edition.cnn.com
  tls:
  - match:
    - port: 443
      sni_hosts:
      - edition.cnn.com
    route:
    - destination:
        host: edition.cnn.com
        port:
          number: 443
      weight: 100
What's the purpose of the VirtualService in this scenario?
The VirtualService object is basically an abstract Pilot resource that modifies the Envoy filter configuration.
So creating a VirtualService is a way of modifying Envoy, and its main purpose is to answer the question: "for a name, how do I route to backends?"
A VirtualService can also be bound to a Gateway.
In your case, the lack of a VirtualService means Envoy is not modified from the default/global configuration; that default configuration was enough for this case to work correctly.
So the Gateway that was used was most likely the default one, with the same protocol and port that you requested with curl, all of which matched your ServiceEntry's requirements for connectivity.
This is also mentioned in the Istio documentation:
Virtual services, along with destination rules, are the key building blocks of Istio's traffic routing functionality. A virtual service lets you configure how requests are routed to a service within an Istio service mesh, building on the basic connectivity and discovery provided by Istio and your platform. Each virtual service consists of a set of routing rules that are evaluated in order, letting Istio match each given request to the virtual service to a specific real destination within the mesh. Your mesh can require multiple virtual services or none depending on your use case.
You can use a VirtualService to add things like a timeout to the connection, as in this example; a minimal sketch of that is shown below.
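As a minimal sketch of such a timeout (the service name and the 5s value are placeholders, not taken from the linked example):
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service
  http:
  - timeout: 5s                        # requests taking longer than 5 seconds are failed
    route:
    - destination:
        host: my-service
        port:
          number: 80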
You can check the routes for your service with the following command from the Istio documentation: istioctl proxy-config routes <pod-name[.namespace]>
For bookinfo productpage demo app it is:
istioctl pc routes $(kubectl get pod -l app=productpage -o jsonpath='{.items[0].metadata.name}') --name 9080 -o json
This way you can check how the routes look without a VirtualService object.
Hope this helps you understand Istio.
The VirtualService is not really doing anything, but as the docs say:
creating a VirtualService with a default route for every service, right from the start, is generally considered a best practice in Istio
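As a sketch of what such a default-route VirtualService might look like (reusing the reviews host from the example above purely for illustration, with no match conditions and a single destination):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-default
spec:
  hosts:
  - reviews.prod.svc.cluster.local
  http:
  - route:
    - destination:
        host: reviews.prod.svc.cluster.local   # single default destination for all traffic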
The ServiceEntry adds the CNN site as an entry to Istio’s internal service registry, so auto-discovered services in the mesh can route to these manually specified services.
Usually that's used to allow monitoring and other Istio features of external services from the start, whereas the VirtualService allows the proper routing of requests (basically traffic management).
This page in the docs gives a bit more background info on using ServiceEntries and VirtualServices, but basically the ServiceEntry makes sure your mesh knows about the service and can monitor it, and the VirtualService controls what traffic is going to the service, which in this case is all of it.

eks http https redirect using ingress

This is my Ingress file. What I need is how to add HTTPS redirection settings here in the Ingress file. I did it using the Service file and it works, but to reduce costs I decided to use a SINGLE Ingress file which manages multiple services with a SINGLE AWS Classic Load Balancer.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  generation: 4
  name: brain-xx
  namespace: xx
spec:
  rules:
  - host: app.xx.com
    http:
      paths:
      - backend:
          serviceName: xx-frontend-service
          servicePort: 443
        path: /
status:
  loadBalancer:
    ingress:
    - ip: xx.xx.xx.xx
I have managed to create HTTP to HTTPS redirection on GKE. Let me know if this solution will work for your case on AWS:
Steps to reproduce
Apply Ingress definitions
Configure basic HTTP ingress resource
Create SSL certificate
Replace old Ingress resource with HTTPS enabled one.
Apply Ingress definitions
Follow this Ingress link to check whether there are any prerequisites needed before installing the NGINX Ingress controller on your AWS infrastructure, and install it.
Configure basic HTTP ingress resource and test it
The example below is an Ingress configuration with HTTP traffic only.
It will act as a starting point:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-http
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: xx.yy.zz
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-service
          servicePort: hello-port
      - path: /v2/
        backend:
          serviceName: goodbye-service
          servicePort: goodbye-port
Please change this file to reflect a configuration appropriate to your case.
Create SSL certificate
For this to work without browser security warnings you will need a valid SSL certificate and a domain name.
To create this certificate you can use, for example: Linode create Let's Encrypt SSL certificates.
Let's Encrypt will create the files which will be used later.
Configure HTTPS ingress resource and test it
By default the NGINX Ingress controller will create a self-signed certificate if it is not provided one. To provide one, you will need to add it as a secret to your Kubernetes cluster.
As I said earlier, the files (cert.pem, privkey.pem) that Let's Encrypt created will be added to Kubernetes to configure HTTPS.
The command below will use these files to create a secret for the Ingress:
$ kubectl create secret tls ssl-certificate --cert cert.pem --key privkey.pem
This Ingress configuration supports HTTPS and redirects all traffic to it:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-https
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
  - secretName: ssl-certificate
  rules:
  - host: xx.yy.zz
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-service
          servicePort: hello-port
      - path: /v2/
        backend:
          serviceName: goodbye-service
          servicePort: goodbye-port
Please change this file to reflect a configuration appropriate to your case.
Take a look at this fragment which will enable HTTPS and redirect all the traffic to it:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
  - secretName: ssl-certificate
Apply this configuration and check if it works for you.
Below is part of the curl output which shows that connecting to http://xx.yy.zz results in a redirect to https://xx.yy.zz:
< HTTP/1.1 308 Permanent Redirect
< Server: openresty/1.15.8.2
< Date: Fri, 20 Dec 2019 15:06:57 GMT
< Content-Type: text/html
< Content-Length: 177
< Connection: keep-alive
< Location: https://xx.yy.zz/