Private paths in a public API with Kubernetes - amazon-web-services

We have a microservice architecture based on Kubernetes in Amazon EKS with Ambassador as API Gateway.
We have two Ambassadors: one public and one private. So we have services that are only accessible by services in the cluster or over VPN, and some services that are public.
We need to make some URL paths in the public services private. For example, we have a public API that is reachable at api.company.com, and we want to leave most paths public, like api.company.com/createuser and api.company.com/login, but make other paths private, for example api.company.com/swagger.html.
We know that we could enable authentication for those paths in the API, but we are looking for a solution without auth.
An example of how we configure a K8s Service with Ambassador for public services:
apiVersion: v1
kind: Service
metadata:
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: backends_mapping
      prefix: /
      ambassador_id: ambassador-public
      service: backends.svc:8080
      host: api.mycompany.com
  labels:
    app: backends
  name: backends
  namespace: svc
spec:
  ports:
  - name: http-backends
    port: 8080
    protocol: TCP
    targetPort: http-api
  selector:
    app: backends
  type: ClusterIP

Not sure what you mean by "without auth". You will need some sort of check to serve internal content.
One approach to achieve this can be the following (note this is a high-level overview):
Make the service private; do not expose it directly.
Prefix all your internal routes with, say, an /internal/ or /private/ prefix.
So api.company.com/swagger.html becomes api.company.com/internal/swagger.html.
Create a load balancer that points to a middleware service.
The middleware (public service) will intercept all the requests; Nginx can be used here. If the request has an /internal/ path, check whether it satisfies the condition (origin, internal network, etc.).
If the check passes, route the request to the private service.
If the check fails, return 403 Forbidden or whatever response code fits.
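As a minimal sketch of that middleware check, assuming an NGINX pod sits in front of the backend and the internal network is 10.0.0.0/8 (replace with your VPC/VPN CIDR), the NGINX configuration could be shipped in a ConfigMap like this; the names here are hypothetical:

apiVersion: v1
kind: ConfigMap
metadata:
  name: edge-nginx-conf   # hypothetical name, mounted into the NGINX pod at /etc/nginx/conf.d/
  namespace: svc
data:
  default.conf: |
    server {
      listen 8080;

      # Internal-only paths: allow the internal range, reject everyone else with 403.
      location /internal/ {
        allow 10.0.0.0/8;   # assumed VPC/VPN CIDR
        deny all;
        proxy_pass http://backends.svc:8080;
      }

      # Everything else stays public.
      location / {
        proxy_pass http://backends.svc:8080;
      }
    }

Note that if traffic reaches NGINX through a load balancer, you would also need to trust the forwarded client IP (e.g. via the real_ip module) for the allow/deny check to be meaningful.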

Cilium can do just what you want: http://docs.cilium.io/en/stable/policy/language/#http
Basically you can specify L7 network policies which will only allow access to some of your API paths from certain pods.
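For illustration only, a CiliumNetworkPolicy for the setup in the question could look roughly like this; the gateway pod labels are assumptions:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backends-public-paths
  namespace: svc
spec:
  endpointSelector:
    matchLabels:
      app: backends
  ingress:
  # Pods of the public Ambassador may only reach the public paths.
  - fromEndpoints:
    - matchLabels:
        service: ambassador-public    # assumed label on the public gateway pods
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:
        - path: "/createuser"
        - path: "/login"
  # Pods of the private Ambassador may reach everything, including /swagger.html.
  - fromEndpoints:
    - matchLabels:
        service: ambassador-private   # assumed label on the private gateway pods
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP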
Cilium project page: https://cilium.io/
Layer 7 policies example: http://docs.cilium.io/en/stable/policy/language/#http
EKS install guide: http://docs.cilium.io/en/v1.4/gettingstarted/k8s-install-eks/?highlight=eks
Disclaimer: I am part of the team that develops Cilium.

Related

How to expose multiple services with TCP using nginx-ingress controller?

I have multiple deployments of an RDP application running, and they are all exposed with a ClusterIP Service. I have the nginx-ingress controller in my k8s cluster, and to allow TCP I have added the --tcp-services-configmap flag to the nginx-ingress controller deployment and also created a ConfigMap for it, shown below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  3389: "demo/rdp-service1:3389"
This will expose the "rdp-service1" Service. I have 10 more such services which need to be exposed on the same port number, but if I add more services to the same ConfigMap like this:
...
data:
  3389: "demo/rdp-service1:3389"
  3389: "demo/rdp-service2:3389"
then it will overwrite the previous service's entry, and since I have also deployed external-dns in k8s, all the records created by the ingress using host: ... will start pointing to the deployment attached to the newly added service in the ConfigMap.
My final requirement is that, as soon as I append the rule for a newly created deployment (RDP application) to the ingress, it should start allowing TCP connections for it. Is there any way to achieve this? Or is there any other ingress controller available that can handle this type of use case and can also easily be integrated with external-dns?
Note: I am using an AWS EKS cluster and Route53 with external-dns.
Posting this answer as a community wiki to explain some of the topics in the question as well as hopefully point to the solution.
Feel free to expand/edit it.
NGINX Ingress's main responsibility is to forward HTTP/HTTPS traffic. With the addition of the tcp-services/udp-services ConfigMaps it can also forward TCP/UDP traffic to the respective endpoints:
Kubernetes.github.io: Ingress nginx: User guide: Exposing tcp udp services
The main issue is that Host-based routing for the Ingress resource in Kubernetes specifically targets HTTP/HTTPS traffic, not TCP (RDP).
You could achieve a following scenario:
Ingress controller:
3389 - RDP Deployment #1
3390 - RDP Deployment #2
3391 - RDP Deployment #3
Where there would be no Host based routing. It would be more like port-forwarding.
A side note!
This setup would also depend on the ability of the LoadBalancer to allocate ports (which may be limited by the cloud provider's specification).
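A minimal sketch of that port-per-service layout in the tcp-services ConfigMap (the additional service names are assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "3389": "demo/rdp-service1:3389"
  "3390": "demo/rdp-service2:3389"
  "3391": "demo/rdp-service3:3389"

The corresponding ports also need to be opened on the ingress controller's Service/LoadBalancer so the traffic can reach it.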
As for possible solutions, which may not be so straightforward, I would take a look at the following resources:
Stackoverflow.com: Questions: Nginx TCP forwarding based on hostname
Doc.traefik.io: Traefik: Routing: Routers: Configuring TCP routers
Github.com: Bolkedebruin: Rdpgw
I'd also check following links:
Aws.amazon.com: Quickstart: Architecture: RD gateway - AWS specific
Docs.konghq.com: Kubernetes ingress controller: 1.2.X: Guides: Using tcpingress
Haproxy:
Haproxy.com: Documentation: Aloha: 12-0: Deployment guides: Remote desktop: RDP gateway
Haproxy.com: Documentation: Aloha: 10-5: Deployment guides: Remote desktop
Haproxy.com: Blog: Microsoft remote desktop services rds load balancing and protection
Actually, I really don't know why you are using that ConfigMap.
To my knowledge, the nginx-ingress controller routes traffic coming in on the same port based on host. So if you want to expose your applications on the same port, try using this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ .Chart.Name }}-ingress
  namespace: your-namespace
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: your-hostname
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          serviceName: {{ .Chart.Name }}-service
          servicePort: {{ .Values.service.nodeport.port }}
Looking at your requirement, I feel that you need a LoadBalancer rather than an Ingress.
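As a sketch of that idea, you could give each RDP deployment its own LoadBalancer Service; the names, labels and the hostname annotation below are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: rdp-service1-lb
  namespace: demo
  annotations:
    # picked up by external-dns to create the Route53 record
    external-dns.alpha.kubernetes.io/hostname: rdp1.example.com
spec:
  type: LoadBalancer
  selector:
    app: rdp-app1   # assumed pod label of the RDP deployment
  ports:
  - name: rdp
    protocol: TCP
    port: 3389
    targetPort: 3389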

Split Horizon DNS in local Minikube cluster and AWS

Initially, I deployed my frontend web application and all the backend APIs in AWS ECS. Each of the backend APIs has a Route53 record, and the frontend is connected to these APIs via the .env file. Now I would like to migrate from ECS to EKS, and I am trying to deploy all these applications in a local Minikube cluster. I would like to keep the .env in my frontend application unchanged (using the same URLs for all the environment variables). The application should first look for the backend API inside the local cluster through service discovery; if the backend API doesn't exist in the cluster, it should connect to the external service, which is the API deployed in ECS. In short: first local (Minikube cluster), then external (AWS). How can I implement this in Kubernetes?
http://backendapi.learning.com --> backend API deployed in a pod --> if not present --> backend API deployed in ECS
.env
BACKEND_API_URL = http://backendapi.learning.com
One example in the code where the frontend calls the backend API:
export const ping = async _ => {
  const res = await fetch(`${process.env.BACKEND_API_URL}/ping`);
  const json = await res.json();
  return json;
}
Assuming that your setup is:
based on a microservices architecture,
with the applications deployed in the Kubernetes cluster (frontend and backend) being Dockerized,
with the applications capable of running on top of Kubernetes,
etc.
You can configure your Kubernetes cluster (minikube instance) to relay your requests to different locations by using Services.
Service
In Kubernetes terminology "Service" is an abstract way to expose an application running on a set of Pods as a network service.
Some of the types of Services are following:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
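Purely to illustrate the ExternalName type quoted above (it is not the approach used below), a minimal Service mapping a cluster-local name to an external hostname could look like this:

apiVersion: v1
kind: Service
metadata:
  name: backendapi-external   # hypothetical name
spec:
  type: ExternalName
  externalName: backendapi.learning.com   # DNS resolves this name via a CNAME record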
You can use Headless Service with selectors and dnsConfig (in Deployment manifest) to achieve the setup referenced in your question.
Let me explain more:
Example
Let's assume that you have a backend:
nginx-one - located inside and outside
Your frontend manifest, in its most basic form, could look like the following:
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: ubuntu
        image: ubuntu
        command:
        - sleep
        - "infinity"
      dnsConfig: # <--- IMPORTANT
        searches:
        - DOMAIN.NAME
Taking a closer look at:
      dnsConfig: # <--- IMPORTANT
        searches:
        - DOMAIN.NAME
Dissecting the above part:
dnsConfig - the dnsConfig field is optional and it can work with any dnsPolicy settings. However, when a Pod's dnsPolicy is set to "None", the dnsConfig field has to be specified.
searches: a list of DNS search domains for hostname lookup in the Pod. This property is optional. When specified, the provided list will be merged into the base search domain names generated from the chosen DNS policy. Duplicate domain names are removed. Kubernetes allows for at most 6 search domains.
As for the Services for your backends.
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-one
spec:
  clusterIP: None # <-- IMPORTANT
  selector:
    app: nginx-one
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
The above Service will tell your frontend that one of your backends (nginx) is available through a Headless Service (why it's Headless will come in handy later!). By default you could communicate with it by:
service-name (nginx-one)
service-name.namespace.svc.cluster.local (nginx-one.default.svc.cluster.local) - only locally
Connecting to your backend
Assuming that you are sending the request using curl (for simplicity) from frontend to backend you will have a specific order when it comes to the DNS resolution:
check the DNS record inside the cluster
check the DNS record specified in dnsConfig
The specifics of connecting to your backend will be following:
If the Pod with your backend is available in the cluster, the DNS resolution will point to the Pod's IP (not the ClusterIP).
If the backend Pod is not available in the cluster for whatever reason, the DNS resolution will first check the internal records and then opt to use the DOMAIN.NAME in the dnsConfig (outside of minikube).
If there is no Service associated with a specific backend (nginx-one), the DNS resolution will use the DOMAIN.NAME in the dnsConfig, searching for it outside of the cluster.
A side note!
The Headless Service with a selector comes into play here, as its intention is to point directly to the Pod's IP and not the ClusterIP (which exists as long as the Service exists). If you used a "normal" Service you would always try to communicate with the ClusterIP even if there are no Pods available matching the selector. By using a headless one, if there is no Pod, the DNS resolution looks further down the line (external sources).
Additional resources:
Minikube.sigs.k8s.io: Docs: Start
Aws.amazon.com: Blogs: Compute: Enabling dns resolution for amazon eks cluster endpoints
EDIT:
You could also take a look on alternative options:
Alternative option 1:
Use rewrite rule plugin in CoreDNS to rewrite DNS queries for backendapi.learning.com to backendapi.default.svc.cluster.local
Alternative option 2:
Add hostAliases to the Frontend Pod
You can also use Configmaps to re-use .env files.
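For alternative option 1, a sketch of what the rewrite could look like in the CoreDNS Corefile (edited via the coredns ConfigMap in kube-system); everything except the rewrite line is the stock Corefile and may differ in your cluster:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        # answer queries for the external name with the in-cluster Service record
        rewrite name backendapi.learning.com backendapi.default.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }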

How to always allow HTTP OPTIONS requests in Istio?

We've designed our API to use Istio JWT authentication, which is mandatory, and at the same time we use CORS. The problem is that our JS code makes AJAX calls, and the HTTP OPTIONS pre-flight request is sent without the JWT Authorization header. Unfortunately the pre-flight request is then blocked by Istio. How can we solve this?
Not sure if I understood your question correctly, but I think Service Entry will solve this.
ServiceEntry enables adding additional entries into Istio’s internal service registry, so that auto-discovered services in the mesh can access/route to these manually specified services. A service entry describes the properties of a service (DNS name, VIPs, ports, protocols, endpoints). These services could be external to the mesh (e.g., web APIs) or mesh-internal services that are not part of the platform’s service registry (e.g., a set of VMs talking to services in Kubernetes).
Service Entry for your example might look like the following:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-svc-https
spec:
  hosts:
  - api.foobar.com
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
  resolution: DNS

Istio and (or versus) Nginx Ingress Controller

I'm on a journey of testing Istio, and at the moment I'm about to test the "canary" capabilities of routing traffic.
For my test, I created a small service mesh composed of 5 microservices (serviceA, serviceB, serviceC, serviceD, serviceE). Each one is able to call the others. I just pass a path like A,E,C,B,B,D and the request follows this path.
In order to call my service mesh from outside the cluster, I have an Nginx Ingress Controller with an Ingress rule that points to the serviceA pod.
This is working fine.
The problem I'm facing concerns traffic switching using a custom header match like this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ServiceA
  namespace: demo
  labels:
    app: demo
spec:
  hosts:
  - service-a
  http:
  - route:
    - destination:
        host: service-a
        subset: v1
  - match:
    - headers:
        x-internal-request:
          exact: true
    route:
    - destination:
        host: service-a
        subset: v2
So here, I want to route the traffic to the v2 version of ServiceA when the custom header x-internal-request is set to true.
Questions:
In order to use this feature, do my services have to be aware of the x-internal-request header and pass it along to the next service in the chain? Or do they not need to deal with it because Istio does the job for them?
In order to use this feature, do I need to use the Istio Ingress Controller (with an Istio Gateway) instead of the Nginx Ingress Controller?
Today I am using the Nginx Ingress Controller to expose some of my services. We chose Nginx because it has features like "external authorization" that save us a lot of work, and if we need to use the Istio ingress controller instead, I'm not sure it offers the same features as Nginx.
Perhaps there is a middle path I do not see.
Thank you for your help.
Istio is designed to use Envoy, deployed on each Pod as a sidecar, to intercept and proxy network traffic between microservices in the service mesh.
You can manipulate HTTP headers for requests and responses via Envoy as well. According to the official documentation, custom headers can be added to the request/response in the following order: weighted cluster level headers, route level headers, virtual host level headers and finally global level headers. Because the Envoy proxies are deployed as sidecars on each relevant service Pod, the custom HTTP header should be passed along with each request or response.
I would recommend using the Istio Ingress Controller with its core component, the Istio Gateway, which is commonly used for enabling the monitoring and routing-rule features of Istio mesh services.
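For reference, a minimal sketch of an Istio Gateway the VirtualService could be attached to (the gateway name and hostname are assumptions):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: demo-gateway
  namespace: demo
spec:
  selector:
    istio: ingressgateway   # the default Istio ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "demo.example.com"

The VirtualService from the question would then list demo-gateway under spec.gateways (and the external hostname under spec.hosts) so that the header-based routing also applies to traffic entering through the gateway.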
There was an issue opened on GitHub about the implementation of Nginx Ingress controller in mesh services and the problem with routing requests.

Reference ConfigMap / Secret in Kubernetes object metadata

I have a Kubernetes cluster provisioned at AWS with kops. I use the route53 mapper to configure the ELB based on Service annotations, and I use namespaces for the different environments dev, test, prod, with configuration defined in ConfigMap and Secret objects.
The environments have different hostnames and TLS certificates:
kind: Service
apiVersion: v1
metadata:
  name: http-proxy-service
  labels:
    dns: route53
  annotations:
    domainName: <env>.myapp.example.io
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: |-
      arn:aws:acm:eu-central-1:44315247xxxxxxxxxxxxxxxx
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
spec:
  selector:
    app: http-proxy
  ports:
  - name: https
    port: 443
Is there a Kubernetes way to reference ConfigMap/Secret objects in the metadata section of the object descriptor so I can have only one file for all environments?
I am looking for a pure Kubernetes solution, not using any templating before sending the file to the API via kubectl.
There is not.
FWIW, it seems nuts that that mapper was designed to pull cert data from annotations on a Service. Service objects are not otherwise secret.
The mapper should be able to consume cert data from a Secret that has well defined fields to indicate what domain should be wired with what cert data in front of what service.