Istio Traffic Routing deny by match prefix - istio

Is there a way to deny traffic to a specific prefix so only internal traffic is allowed?
Example from here
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - bookinfo.com
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        prefix: /v2 # <---- route all traffic to /v2*
    route:
    ...
  - match:
    - uri:
        prefix: /v2/internal # <---- but deny all traffic to /v2/internal*

I see two ways to achieve this:
Direct the traffic intended for /v2/internal* to 127.0.0.1.
Use the Policy blacklist adapter to deny requests to /v2/internal*, see this. Use the request.path attribute as the value of the listentry.
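Neither option is spelled out above, so purely as a rough sketch of one way to get the same effect (not the answer's exact config): instead of routing /v2/internal* to 127.0.0.1, Istio's fault injection can abort matching requests with a 403. The more specific /v2/internal match has to come before the broader /v2 match, and the destination host productpage below is a placeholder.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - bookinfo.com
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        prefix: /v2/internal      # more specific match first
    fault:
      abort:
        percentage:
          value: 100              # abort every matching request
        httpStatus: 403           # deny with a 403 instead of routing
    route:
    - destination:
        host: productpage         # placeholder; never reached because of the abort
  - match:
    - uri:
        prefix: /v2               # everything else under /v2 routes normally
    route:
    - destination:
        host: productpage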


Kubernetes ingress does not create rules in Alb/EKS when it has wildcard

This post describes a problem I was facing and how I arrived at a solution, in case anyone else is experiencing something similar.
FACT
Kubernetes Ingress does not create rules in ALB/EKS when it has a wildcard.
When trying to create overlapping paths in a Kubernetes Ingress, requests are not routed as expected.
Ingress cannot create paths with a wildcard.
CAUSE
The wildcard rule is not created in the ALB on AWS.
When trying to use a wildcard in an Ingress path in Kubernetes, errors occur in the ingress / aws-load-balancer-controller.
Ingress Logs:
Events:
  Type     Reason            Age                  From     Message
  ----     ------            ----                 ----     -------
  Warning  FailedBuildModel  53s (x974 over 11d)  ingress  Failed build model due to ingress: api-checkout/api-checkout-develop: prefix path shouldn't contain wildcards: /teste/*
aws-load-balancer-controller logs:
kubectl logs -n kube-system deployment.apps/aws-load-balancer-controller
{"level":"error","ts":1669761828.7741847,"logger":"controller-runtime.manager.controller.ingress","msg":"Reconciler error","name":"api-checkout-max-shopify-develop","namespace":"api-checkout-max-shopify-develop","error":"ingress: api-checkout-max-shopify-develop/api-checkout-max-shopify-develop: prefix path shouldn't contain wildcards: /shopify/*"}
Wildcard rules can only be created in the ALB manually, through the AWS Console.
ACTION
Instead of using "pathType: Prefix", use "pathType: ImplementationSpecific" when creating paths via a YAML manifest in Kubernetes.
This way the Ingress can create the proper rules (even with a wildcard) in the Load Balancer / ALB in AWS.
Source:
https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types
ImplementationSpecific: With this path type, matching is up to the IngressClass. Implementations can treat this as a separate pathType or treat it identically to Prefix or Exact path types.
EXAMPLE:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: $K8S_INGRESS_NAME_NGINX
  namespace: $K8S_NAMESPACE
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/certificate-arn: $INGRESS_CERT_ARN,$INGRESS_CERT_SUBDOMAIN_ARN
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-Ext-2018-06
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
  - host: $URL
    http:
      paths:
      - pathType: ImplementationSpecific
        path: "/teste"
        backend:
          service:
            name: $K8S_DEPLOY_NAME_NGINX_FRONTEND
            port:
              number: $K8S_SERVICE_PORT_NGINX_FRONTEND
      - pathType: ImplementationSpecific
        path: "/teste/*"
        backend:
          service:
            name: $K8S_DEPLOY_NAME_NGINX_BACKEND
            port:
              number: $K8S_SERVICE_PORT_NGINX_BACKEND
      - pathType: Prefix
        path: "/checkout"
        backend:
          service:
            name: $K8S_DEPLOY_NAME_NGINX_FRONTEND
            port:
              number: $K8S_SERVICE_PORT_NGINX_FRONTEND
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: $K8S_DEPLOY_NAME_NGINX_BACKEND
            port:
              number: $K8S_SERVICE_PORT_NGINX_BACKEND

istio virtualservice url rewrite

I'm trying to wrap my head around VirtualService URL rewrites. This is what I have:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istio-test
spec:
  hosts:
  - "*"
  gateways:
  - istio-test-gateway
  http:
  - name: "pingpongservice"
    match:
    - uri:
        prefix: /pingpongservice
    rewrite:
      uri: /
    route:
    - destination:
        host: istio-service-test.default.svc.cluster.local
Now when I curl the URL http://host:port/pingpongservice I get back the proper response. When I try something like http://host:port/pingpongservice/ping I get the following error:
Moved Permanently.
The ping endpoint actually exists in the service I have deployed, so I'm not sure why I'm getting this response back.
I'm using Istio on minikube.

istio: VirtualService url rewriting or forwarding

I have an Istio VirtualService with a match, a route, and a redirect URL defined as follows:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-pro
spec:
  hosts:
  - "*"
  gateways:
  - my-gateway
  http:
  - match:
    - uri:
        prefix: /events
    route:
    - destination:
        host: event-service
        port:
          number: 8000
  - match:
    - uri:
        prefix: /blog
    redirect:
      uri: /
      authority: blog.mydomain.com
  - route:
    - destination:
        host: default-service
        port:
          number: 8000
This VirtualService works as follows:
If the request is www.mydomain.com/events it is forwarded to event-service.
If the request is www.mydomain.com/blog the host is redirected to blog.mydomain.com.
If the request is www.mydomain.com/anyother it is forwarded to default-service.
In case no. 2 I am redirecting www.mydomain.com/blog to blog.mydomain.com because my blog is hosted on that domain.
Now my problem is that while redirecting, the browser URL changes to blog.mydomain.com. I want it to remain www.mydomain.com/blog while the content of blog.mydomain.com is displayed on the screen.
I think you should use rewrite with a destination: https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPRewrite
If the destination is external to the service mesh, you'll also need a ServiceEntry.
- match:
  - uri:
      prefix: /blog
  name: blog.mydomain.com
  rewrite:
    authority: blog.mydomain.com
    uri: /blog
  route:
  - destination:
      host: blog.mydomain.com
Add the above rule to the VirtualService, then create this ServiceEntry:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: blog
spec:
  hosts:
  - blog.mydomain.com
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS

In AWS EKS, how can I define ingress to use one ALB for multiple subdomain URLs, each with their own certificate?

I have multiple services that need to be exposed to the internet, but I'd like to use a single ALB for them.
I am using the latest AWS Load Balancer Controller, and I've been reading the documentation here (https://kubernetes-sigs.github.io/aws-load-balancer-controller/guide/ingress/annotations/#traffic-routing), but I haven't found a clear explanation on how to achieve this.
Here's the setup:
I have service-a.example.com -and- service-b.example.com. They each have their own certificates within Amazon Certificate Manager.
Within Kubernetes, each has its own service object defined as follows (each unique):
apiVersion: v1
kind: Service
metadata:
  name: svc-a-service
  annotations:
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthy-threshold-count: '5'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
    alb.ingress.kubernetes.io/healthcheck-path: /index.html
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '30'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/tags: Environment=Test,App=ServiceA
spec:
  selector:
    app: service-a
  ports:
  - port: 80
    targetPort: 80
  type: NodePort
And each service has its own Ingress object defined as follows (again, unique to each service, and with the correct certificates specified):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: svc-a-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: services
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/security-groups: sg-01234567898765432
    alb.ingress.kubernetes.io/ip-address-type: ipv4
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/actions.response-503: >
      {"type":"fixed-response","fixedResponseConfig":{"contentType":"text/plain","statusCode":"503","messageBody":"Unknown Host"}}
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true,idle_timeout.timeout_seconds=600
    alb.ingress.kubernetes.io/tags: Environment=Test
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-2:555555555555:certificate/33333333-2222-4444-AAAA-EEEEEEEEEEEE
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-2016-08
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: ssl-redirect
          servicePort: use-annotation
      - path: /*
        backend:
          serviceName: svc-a-service
          servicePort: 80
      - path: /*
        backend:
          serviceName: response-503
          servicePort: use-annotation
The HTTP to HTTPS redirection works as expected.
However -- there is no differentiation between my two apps for the load balancer to be able to know that traffic destined for service-a.example.com and service-b.example.com should be routed to two different target groups.
In the HTTP:443 listener rules in the console, it shows:
IF Path is /* THEN Forward to ServiceATargetGroup
IF Path is /* THEN Return fixed 503
IF Path is /* THEN Forward to ServiceBTargetGroup
IF Path is /* THEN Return fixed 503
IF Request otherwise not routed THEN Return fixed 404
So the important question here is:
How should the ingress be defined to force traffic destined for service-a.example.com to ServiceATargetGroup - and traffic destined for service-b.example.com to ServiceBTargetGroup?
And secondarily, I need the "otherwise not routed" to return a 503 instead of 404. I was expecting this to appear only once in the rules (be merged) - yet it is created for each ingress. How should my yaml be structured to achieve this?
I eventually figured this out -- so for anyone else stumbling onto this post, here's how I resolved it:
The trick was to not rely on merging between the Ingress objects. Yes, it can handle a certain degree of merging, but there isn't really a one-to-one relationship between Services (as target groups) and Ingresses (as ALBs), so you have to be very cautious and aware of what's in each Ingress object.
Once I combined all of my ingress into a single object definition, I was able to get it working exactly as I wanted with the following YAML:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: svc-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: services
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/security-groups: sg-01234567898765432
    alb.ingress.kubernetes.io/ip-address-type: ipv4
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/actions.response-503: >
      {"type":"fixed-response","fixedResponseConfig":{"contentType":"text/plain","statusCode":"503","messageBody":"Unknown Host"}}
    alb.ingress.kubernetes.io/actions.svc-a-host: >
      {"type":"forward","forwardConfig":{"targetGroups":[{"serviceName":"svc-a-service","servicePort":80,"weight":100}]}}
    alb.ingress.kubernetes.io/conditions.svc-a-host: >
      [{"field":"host-header","hostHeaderConfig":{"values":["svc-a.example.com"]}}]
    alb.ingress.kubernetes.io/actions.svc-b-host: >
      {"type":"forward","forwardConfig":{"targetGroups":[{"serviceName":"svc-b-service","servicePort":80,"weight":100}]}}
    alb.ingress.kubernetes.io/conditions.svc-b-host: >
      [{"field":"host-header","hostHeaderConfig":{"values":["svc-b.example.com"]}}]
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true,idle_timeout.timeout_seconds=600
    alb.ingress.kubernetes.io/tags: Environment=Test
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-2:555555555555:certificate/33333333-2222-4444-AAAA-EEEEEEEEEEEE,arn:aws:acm:us-east-2:555555555555:certificate/44444444-3333-5555-BBBB-FFFFFFFFFFFF
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-2016-08
spec:
  backend:
    serviceName: response-503
    servicePort: use-annotation
  rules:
  - http:
      paths:
      - backend:
          serviceName: ssl-redirect
          servicePort: use-annotation
      - backend:
          serviceName: svc-a-host
          servicePort: use-annotation
      - backend:
          serviceName: svc-b-host
          servicePort: use-annotation
Default Action:
Set by specifying the serviceName and servicePort directly under spec:
spec:
  backend:
    serviceName: response-503
    servicePort: use-annotation
Routing:
Because I'm using subdomains and paths won't work for me, I simply omitted the path and instead relied on hostname as a condition.
metadata:
  annotations:
    alb.ingress.kubernetes.io/actions.svc-a-host: >
      {"type":"forward","forwardConfig":{"targetGroups":[{"serviceName":"svc-a-service","servicePort":80,"weight":100}]}}
    alb.ingress.kubernetes.io/conditions.svc-a-host: >
      [{"field":"host-header","hostHeaderConfig":{"values":["svc-a.example.com"]}}]
End Result:
The ALB rules were configured precisely how I wanted them:
default action is a 503 fixed response
all http traffic is redirected to https
traffic is directed to TargetGroups based on the host header
AWS EKS now has a notion of IngressGroups, so multiple Ingress resources can share a single ALB. See Application load balancing on Amazon EKS.
To share an application load balancer across multiple ingress resources using IngressGroups
To join an Ingress to an Ingress group, add the following annotation to a Kubernetes Ingress resource specification.
alb.ingress.kubernetes.io/group.name: <my-group>
The group name must be:
63 characters or less in length.
Consist of lower case alphanumeric characters, -, and ., and must start and end with an alphanumeric character.
The controller will automatically merge ingress rules for all Ingresses in the same Ingress group and support them with a single ALB. Most annotations defined on an Ingress only apply to the paths defined by that Ingress. By default, Ingress resources don't belong to any Ingress group.
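As a rough sketch (not from the original post), two Ingresses that should share one ALB could look like this; the names, hosts, and services below are placeholders:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: svc-a-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: services   # same group name joins the same ALB
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
  - host: svc-a.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-a-service
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: svc-b-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: services   # joins the same "services" group
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
  - host: svc-b.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-b-service
            port:
              number: 80
The controller merges the rules of both Ingresses into one ALB because they carry the same group.name annotation.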

Redirecting traffic to external url

Updates based on comments:
Let's say there's an API hosted at hello.company1.com in another GCP project.
I would like it so that when someone visits a URL abc.company.com they are served traffic from hello.company1.com, something similar to an API gateway.
It could easily be done with an API gateway; I am just trying to figure out if it's possible with a K8s Service and Ingress.
I have created a Cloud DNS zone as abc.company.com.
When someone visits abc.company.com/google I would like the request to be forwarded to an external URL, let's say google.com.
Could this be achieved by creating a Service of type ExternalName and an Ingress with host name abc.company.com?
kind: Service
apiVersion: v1
metadata:
  name: test-srv
spec:
  type: ExternalName
  externalName: google.com
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  - host: abc.company.com
  - http:
      paths:
      - path: /google
        backend:
          serviceName: test-srv
It's possible to achieve what you want, however you will need to use the NGINX Ingress controller, as you will need a specific annotation: nginx.ingress.kubernetes.io/upstream-vhost.
It was well described in this GitHub issue, based on storage.googleapis.com.
apiVersion: v1
kind: Service
metadata:
  name: google-storage-buckets
spec:
  type: ExternalName
  externalName: storage.googleapis.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: proxy-assets-ingress
  annotations:
    kubernetes.io/ingress.class: nginx-ingress
    nginx.ingress.kubernetes.io/rewrite-target: /[BUCKET_NAME]/[BUILD_SHA]
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/upstream-vhost: "storage.googleapis.com"
spec:
  rules:
  - host: abc.company.com
    http:
      paths:
      - path: /your/path
        backend:
          serviceName: google-storage-buckets
          servicePort: 443
Depending on your needs, if you use it over plain HTTP you would need to change servicePort to 80 and remove the annotation nginx.ingress.kubernetes.io/backend-protocol: "HTTPS".
For additional details, you can check this other similar Stack Overflow question.
Please remember not to use - before both spec.rules.host and spec.rules.http in the same rule. Use - with http only when you don't have a host in your configuration.
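For illustration only (not part of the original answer), the question's Ingress rule would then be written with host and http in a single list item; the servicePort is an assumption here, since the question's manifest omitted it:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  - host: abc.company.com    # host and http belong to the same rule, so only one leading dash
    http:
      paths:
      - path: /google
        backend:
          serviceName: test-srv
          servicePort: 80    # assumed; the question's manifest omitted the port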