How to hide Django Admin from the public on Azure Kubernetes Service while keeping access via backdoor

I'm running a Django app on Azure Kubernetes Service and, for security purposes, would like to do the following:
1. Completely block off the admin portal from the public (e.g. average Joe cannot reach mysite.com/admin)
2. Allow access through some backdoor (e.g. a private network, jump host, etc.)
One scenario would be to run two completely separate services: 1) the main API part of the app, which is just the primary codebase with the admin disabled, served publicly; and 2) a private site behind some firewall with the admin enabled. Each could be on a different cluster with a different FQDN, but all connecting to the same datastore. This is definitely overkill - there must be a way to keep everything within the cluster.
I think there might be a way to configure the Azure networking layer to block/allow traffic from specific IP ranges, and do it on a per-endpoint basis (e.g. mysite.com/admin versus mysite.com/api/1/test). Alternatively, maybe this is doable on a per-subdomain level (e.g. api.mysite.com/anything versus admin.mysite.com/anything).
This might also be doable at the Kubernetes ingress layer but I can't figure out how.
What is the easiest way to satisfy the 2 requirements?

You can manage the restriction at the ingress level:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/whitelist-source-range: "192.168.0.XXX, 192.175.2.XXX"
  name: staging-ingress
  namespace: default
spec:
  rules:
  - host: test.example.io
    http:
      paths:
      - backend:
          serviceName: service-name
          servicePort: 80
  tls:
  - hosts:
    - test.example.io
    secretName: tls-cert
You can whitelist IP addresses to allow a specific path only from those sources, which solves your backdoor requirement. For the rest, you can create another ingress rule without the annotation for public access.
For a particular path:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/whitelist-source-range: "192.168.0.XXX, 192.175.2.XXX"
  name: staging-ingress
  namespace: default
spec:
  rules:
  - host: test.example.io
    http:
      paths:
      - path: /admin
        backend:
          serviceName: service-name
          servicePort: 80
  tls:
  - hosts:
    - test.example.io
    secretName: tls-cert
test.example.io/admin will then only be accessible from the whitelisted source range.
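For completeness, here is a minimal sketch of what the companion public ingress could look like, reusing the placeholder service-name and tls-cert from above (the resource name is made up). The nginx controller merges both resources for the same host, so the /admin rule above keeps its IP restriction while everything else stays public:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: public-ingress # hypothetical name
  namespace: default
spec:
  rules:
  - host: test.example.io
    http:
      paths:
      - path: / # catch-all; /admin still hits the whitelisted rule above
        backend:
          serviceName: service-name
          servicePort: 80
  tls:
  - hosts:
    - test.example.io
    secretName: tls-cert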

Related

Istio virtual service spec host and destination rule host

I'm trying to understand the Istio configuration model but the more I read the more I get confused, especially around the hosts and host fields. In their examples, they all use the same short name and I'm not sure whether they mean the virtual service name, the Kubernetes service hostname or the DNS service address.
Assuming I have the following configuration:
My Kubernetes project namespace is called poc-my-ns
Inside poc-my-ns I have my pods (both version 1 and 2), a Kubernetes route and a Kubernetes service.
The service hostname is poc-my-ns.svc.cluster.local and the route is https://poc-my-ns.orgdevcloudapps911.myorg.org.
Everything is up and running and the service selector gets all pods from all versions, as it should. (The Istio virtual service is supposed to do the final selection by version.)
The intended Istio configuration looks like that:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: poc-my-dr
spec:
  host: poc-my-ns.svc.cluster.local # ???
  subsets:
  - name: v1
    labels:
      version: 1.0
  - name: v2
    labels:
      version: 2.0
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: poc-my-vs
spec:
  hosts:
  - poc-my-ns.svc.cluster.local # ???
  http:
  - route:
    - destination:
        host: poc-my-dr # ???
        subset: v1
      weight: 70
    - destination:
        host: poc-my-dr # ???
        subset: v2
      weight: 30
My questions are:
Does the destination rule spec/host refer to the Kubernetes service hostname?
Does the virtual service spec/hosts refer to the Kubernetes service hostname, to the route https://poc-my-ns.orgdevcloudapps911.myorg.org, or to something else?
Does the virtual service spec/http/route/destination/host refer to the destination rule name, is it supposed to point to the Kubernetes service hostname, or should it be the virtual service metadata/name?
I will really appreciate clarifications.
The VirtualService and DestinationRule basically configure the Envoy proxies of the Istio mesh. The VirtualService defines where to route the traffic to and the DestinationRule defines what to additionally do with the traffic.
For the VS, the spec.hosts list can contain Kubernetes-internal and external hosts.
Say you want to define how traffic for api.example.com, coming from outside the Kubernetes cluster through the istio-ingressgateway my-gateway, is routed into the mesh. It should be routed to the rating app in the store namespace, so the VS would look like this:
spec:
  hosts:
  - api.example.com # external host
  gateways:
  - my-gateway # the ingress-gateway
  http:
  - [...]
    route:
    - destination:
        host: rating.store.svc.cluster.local # kubernetes service
If you want to define how cluster/mesh-internal traffic is routed, you set rating.store.svc.cluster.local in the spec.hosts list, define the mesh gateway (or leave it out like you did, because mesh is the default) and route it to the rating.store.svc.cluster.local service. You also add a DR where you define subsets and route all mesh-internal traffic to subset v1.
# VS
[...]
spec:
  hosts:
  - rating.store.svc.cluster.local # cluster internal host
  gateways:
  - mesh # mesh internal gateway (default when omitted)
  http:
  - [...]
    route:
    - destination:
        host: rating.store.svc.cluster.local # cluster internal host
        subset: v1 # defined in the destinationrule below
---
# DR
[...]
spec:
  host: rating.store.svc.cluster.local # cluster internal host
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
But it could also be that you want to route traffic to a cluster-external destination. In that case destination.host would be an external FQDN, as in this example from the docs:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-svc-wikipedia
spec:
  hosts:
  - wikipedia.org
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: example-http
    protocol: HTTP
  resolution: DNS
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-wiki-rule
spec:
  hosts:
  - wikipedia.org
  http:
  - timeout: 5s
    route:
    - destination:
        host: wikipedia.org
Think about it as "I want to route traffic from HOST_FROM to HOST_TO", where
HOST_FROM is spec.host and spec.hosts
HOST_TO is destination.host
and both can be inside the Kubernetes cluster or outside.
So to answer all your questions:
It depends: if you want to route cluster-internal traffic, you'll use a Kubernetes service FQDN. For cluster-external traffic, you'll use the external target FQDN.
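Applied to the names from the question, a minimal corrected sketch of the VirtualService would therefore be as follows (the DestinationRule stays as posted, with host: poc-my-ns.svc.cluster.local; note that poc-my-dr is never referenced by name anywhere):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: poc-my-vs
spec:
  hosts:
  - poc-my-ns.svc.cluster.local # the Kubernetes service FQDN, not the route URL
  http:
  - route:
    - destination:
        host: poc-my-ns.svc.cluster.local # again the service FQDN, not the DestinationRule name
        subset: v1 # matched against the subsets of the DR whose spec.host is the same FQDN
      weight: 70
    - destination:
        host: poc-my-ns.svc.cluster.local
        subset: v2
      weight: 30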
I highly recommend reading through the docs of VirtualService and DestinationRule where you can see several examples with explanations.

Exposing a service on EKS using NGINX ingress and issues with load balancer

I am trying to set up a service and expose it externally on EKS. I have already done it on GKE pretty easily but now AWS is giving me a hard time.
My NGINX yaml looks something like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - app.mydomain.com
    secretName: myapp-tls
  rules:
  - host: app.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp-service
          servicePort: 80
And then I have my domain app.mydomain.com on Google Domains pointing at the ingress external address. There is also a cert-manager service running in order to support HTTPS.
However, while basically the same setup worked completely out of the box on GKE, EKS gives me a hard time.
From what I understand it has something to do with the EKS default LoadBalancer being layer 4, in comparison to Google's layer 7 (which explains HTTPS not working), but there are also issues with redirection of the domain, as it just resolves to the ingress address instead of my desired address, and thus my app doesn't show up.
The domain is registered with Google Domains and I'm creating Synthetic Records (for my subdomain) that point to my ingress external address on EKS. The same scheme works perfectly fine on GKE, but here it resolves the address as the ingress address instead of my domain, which results in a 404 on the ingress side.
I was wondering if someone could please point me to how to properly set it up? Should I give up on nginx ingress on EKS and move to ALB? And how do I properly associate the domain?
Thank you very much in advance!
Edit:
output of kubectl describe ingress myapp-ingress:
Name:             myapp-ingress
Namespace:        default
Address:          ********************************-****************.elb.eu-west-1.amazonaws.com
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  myapp-tls terminates app.mydomain.com
Rules:
  Host              Path  Backends
  ----              ----  --------
  app.mydomain.com
                    /     myapp-service:80 (172.31.2.238:8000)
Annotations:        cert-manager.io/cluster-issuer: myapp-letsencrypt-prod
                    kubernetes.io/ingress.class: nginx
Events:             <none>
Should I give up on nginx ingress on EKS and move onto ALB
No. NGINX ingress controllers work perfectly well on EKS. It is possible to configure them as either layer 4 or layer 7; we use ours in layer 7 mode.
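For context, getting layer 7 with a classic ELB is usually just a matter of annotations on the controller's LoadBalancer Service. A minimal sketch, where the namespace, the selector labels and the certificate ARN are all assumptions/placeholders you'd replace with your own:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx # assumed namespace
  annotations:
    # the ELB terminates TLS and speaks plain HTTP to nginx (layer 7)
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:..." # placeholder ARN
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx # assumed controller labels
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: http # TLS already terminated at the ELB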
Can you update your question with the output of
kubectl get ingress myapp-ingress
I think your ingress path is also incorrect. Unless I'm mistaken, that's just routing the root of your app, not all URIs. We use the scheme:
spec:
  rules:
  - host: service.d.tld
    http:
      paths:
      - path: /?(.*) # <---
        backend:
          serviceName: my-service
          servicePort: http
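One caveat: for a regex path like /?(.*) to actually match, regex matching has to be enabled on the ingress. A sketch of the annotations that would normally accompany the scheme above (annotation fragment only; the rewrite target assumes you want the captured remainder forwarded to the backend):
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true" # enables regex in spec.rules.http.paths.path
    nginx.ingress.kubernetes.io/rewrite-target: /$1 # pass the captured group through to the backend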
Are you seeing errors in the nginx ingress controller's logs? That + kubectl events are both useful for debugging purposes.
I'd disable TLS everywhere and get your service working on http, then work stepwise on getting TLS enabled on the ingress controller.
Edit: Based on your response above,
curl -H "Host: app.mydomain.com" http://<elb-address>:80
SHOULD call through to your service behind the ingress.
How is app.mydomain.com defined? Is it a CNAME to the DNS entry?

Ingress controller does not show external IP

I have been trying to create a Kubernetes cluster on Google Kubernetes Engine. My pods are successfully running, but the problem is with the ingress controller. The ingress controller is not showing the external IP to access the application.
And the YAML file for the nginx ingress looks like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: http-ingress
  labels:
    app: ingress
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nodeapp1svc
          servicePort: 80
      - path: /app1
        backend:
          serviceName: nodeapp2svc
          servicePort: 80
      - path: /app2
        backend:
          serviceName: nodeapp2svc
          servicePort: 80
What can I do next?
It looks like the problem is related to your annotations, specifically this one:
kubernetes.io/ingress.class: addon-http-application-routing
The ingress.class you're trying to use is something specific to Azure AKS, so you definitely cannot use it on your GKE cluster.
Note that you can omit the kubernetes.io/ingress.class annotation altogether if you want the default GKE Ingress controller, ingress-gce, to be used.
I tested it on my GKE cluster and without the above-mentioned annotation it works just fine.
As to your specific setup, I noticed one more problem: your nodeapp[1-3]svc Services are of type ClusterIP and they need to be either NodePort or LoadBalancer.
If you run:
kubectl describe ingress http-ingress
and take a look at the events section, you may encounter an error message like the one below:
loadbalancer-controller error while evaluating the ingress spec: service "default/nodeapp1svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer"
Summary:
use the correct ingress.class, i.e. omit the annotation altogether so that the default ingress controller is used.
make sure your backends are exposed via NodePort rather than ClusterIP (a minimal sketch follows below).
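A minimal sketch of one backend re-exposed as NodePort (the pod selector is an assumption; keep whatever labels your Deployment actually uses):
apiVersion: v1
kind: Service
metadata:
  name: nodeapp1svc
spec:
  type: NodePort # was ClusterIP; ingress-gce requires NodePort or LoadBalancer
  selector:
    app: nodeapp1 # assumed pod label
  ports:
  - port: 80
    targetPort: 80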

NGINX Ingress Controller for multiple services for branch-wise deployment

In my case, I have a branch-wise deployment in EKS 1.14 and I want to handle this with "regex" & Nginx ingress.
Scenario: let's say I have branch B1 with service_A (an apache service), and similarly under B2 a service_A (apache service), and so on, and I want to access each service via a URL like apache-{branch_name}.example.com
Note: branches B1/B2 are nothing but unique namespaces where the same kind of service is running.
I need a single ingress from which I can control all the different branch URLs.
My example file:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: regex-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - '*.k8s.example.com'
    secretName: prod-crt
  rules:
  - host: {service_A-b1}.k8s.acko.in
    http:
      paths:
      - backend:
          serviceName: {service_A-b1}
          servicePort: 80
  - host: {service_A-b2}.k8s.acko.in
    http:
      paths:
      - backend:
          serviceName: {service_A-b2}
          servicePort: 80
Nginx ingress doesn't work this way; it is not possible to have a regex in serviceName or in host.
From NGINX docs:
Regular expressions and wild cards are not supported in the spec.rules.host field. Full hostnames must be used.
You can use a regex only in the path field:
The ingress controller supports case insensitive regular expressions in the spec.rules.http.paths.path field. This can be enabled by setting the nginx.ingress.kubernetes.io/use-regex annotation to true (the default is false).
If you need to set your serviceName and host dynamically, I strongly recommend using some kind of automation (Jenkins, a bash script, etc.) or templates via Helm, which will fill them in at deployment time (see the sketch below).
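For illustration, a minimal Helm-style sketch (all names hypothetical) that stamps out one host/backend rule per branch listed in values.yaml:
# templates/ingress.yaml - hypothetical chart layout
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: regex-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  {{- range .Values.branches }} # e.g. branches: [b1, b2] in values.yaml
  - host: apache-{{ . }}.k8s.example.com
    http:
      paths:
      - backend:
          serviceName: service-a-{{ . }} # hypothetical naming convention
          servicePort: 80
  {{- end }}
Rendering with helm template then produces one rule per branch, so adding a branch becomes a one-line change in values.yaml.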
References:
https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/

how to add more than one service to ingress with url maps?

Hi, I have four microservices running and I want to use one ingress LB for all of them.
The problem here is that my ingress works for only one microservice, and my application has some URLs like index.html, which means I have to access http:///index.html
If I access http:/// it shows a whitelabel error page.
When I use url-maps with path as path1 and try to access http:///path1, it shows a whitelabel error page, which means the backends are working. But when I try to access http:///path1/index.html it shows backend not found.
I need to know how to use url-maps in this case. Kindly help me out.
Here is an example extracted from the Kubernetes documentation [1] that creates one ingress load balancer pointing to different backend services:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80
You can add as many backend services as you need.
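If sub-paths like /path1/index.html come back as backend not found behind such a fanout, a hedged variant (assuming the nginx ingress controller rather than the GCE one) is to capture the rest of the URI and rewrite with it, so the backend receives /index.html instead of a bare /:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1 # forward the captured remainder
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo/?(.*)
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar/?(.*)
        backend:
          serviceName: s2
          servicePort: 80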
[1] https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout