I am running an ingress in GKE. I am routing most of my traffic to one backend but I wish some calls to be routed to another backend. The ingress looks something like this:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: zone-search
            port:
              name: external
        path: /api/v2/zones/location-search
        pathType: Prefix
  - http:
      paths:
      - backend:
          service:
            name: api-service
            port:
              name: external
        path: /*
        pathType: ImplementationSpecific
If I do a request like GET /api/v2/zones/location-search, it works fine.
However, if I do GET /api/v2/zones/location-search?foo=bar, my request ends up in the api-service backend and not in zone-search as I expected.
I have tried using pathType: ImplementationSpecific with both path: /api/v2/zones/location-search and path: /api/v2/zones/location-search/*, but still no progress. Google requires the wildcard to follow a slash, but location-search is the endpoint itself and has no slash after it.
I also tried using a default backend, with the same result. The problem still seems to be that the URL including ?foo=bar doesn't match the path I specified.
I can't use path: /api/v2/zones/* since there are other endpoints in the API under that prefix that would then be routed to the zone-search backend when they shouldn't be.
Update
I tried using double quotes, plus removing the second
- http:
  paths:
block, and started getting failed_to_pick_backend errors. That ended up being solved by changing the health check for the backend service.
I don't know if the health check problem meant that the api-service was selected as a backup when the zone-search service was unhealthy or if one of my two changes solved my initial problem.
Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address. You can use Ingress to reuse the load balancer for multiple domain names, subdomains, and to expose multiple services on a single IP address and load balancer. Check out the simple fanout and name-based virtual hosting examples to learn how to configure Ingress for these tasks.
Note: Always modify the properties of the Load Balancer via the Ingress object. Making changes directly on the load balancing resources might get lost or overridden by the GKE Ingress controller.
On the other hand :
Each external HTTP(S) load balancer or internal HTTP(S) load balancer uses a single URL map, which references one or more backend services. One backend service corresponds to each Service referenced by the Ingress.
Additionally, you can create an Ingress that specifies rules for routing requests depending on the URL path in the request. When you create the Ingress, the GKE Ingress controller creates and configures an external HTTP(S) load balancer; see the official documentation.
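As a rough sketch of that simple-fanout pattern (reusing the names from the question; treat it as an illustration under the assumption that both services sit behind the same rule, not a verified fix), the specific prefix and the catch-all can live side by side, with the more specific path listed first. Note that the query string is not part of the path, so ?foo=bar on its own should not change which path rule matches:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-ingress   # hypothetical name
spec:
  rules:
  - http:
      paths:
      # More specific prefix first: requests to the endpoint, with or
      # without a query string, should land on zone-search.
      - path: /api/v2/zones/location-search
        pathType: Prefix
        backend:
          service:
            name: zone-search
            port:
              name: external
      # Catch-all for everything else.
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: api-service
            port:
              name: external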
Related
Initially, I deployed my frontend web application and all the backend APIs in AWS ECS; each of the backend APIs has a Route53 record, and the frontend is connected to these APIs via the .env file. Now I would like to migrate from ECS to EKS, and I am trying to deploy all these applications in a local Minikube cluster. I would like to keep the .env in my frontend application unchanged (using the same URLs for all the environment variables). The application should first look for the backend API inside the local cluster through service discovery; if the backend API doesn't exist in the cluster, it should connect to the external service, which is the API deployed in ECS. In short: first local (Minikube cluster), then external (AWS). How do I implement this in Kubernetes?
http://backendapi.learning.com --> backend API deployed in a pod --> if not present --> backend API deployed in ECS
.env
BACKEND_API_URL = http://backendapi.learning.com
One example from the code in which the frontend calls the backend API:
export const ping = async _ => {
  const res = await fetch(`${process.env.BACKEND_API_URL}/ping`);
  const json = await res.json();
  return json;
}
Assuming that your setup is:
Based on a microservices architecture.
Applications deployed in the Kubernetes cluster (frontend and backend) are Dockerized.
Applications are capable of running on top of Kubernetes.
etc.
You can configure your Kubernetes cluster (minikube instance) to relay your request to different locations by using Services.
Service
In Kubernetes terminology "Service" is an abstract way to expose an application running on a set of Pods as a network service.
Some of the types of Services are following:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
You can use Headless Service with selectors and dnsConfig (in Deployment manifest) to achieve the setup referenced in your question.
Let me explain more:
Example
Let's assume that you have a backend:
nginx-one - located both inside and outside the cluster
Your frontend manifest in its most basic form should look like the following:
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: ubuntu
        image: ubuntu
        command:
        - sleep
        - "infinity"
      dnsConfig: # <--- IMPORTANT
        searches:
        - DOMAIN.NAME
Taking a closer look at:
dnsConfig: # <--- IMPORTANT
  searches:
  - DOMAIN.NAME
Dissecting the above part:
dnsConfig - the dnsConfig field is optional and it can work with any dnsPolicy settings. However, when a Pod's dnsPolicy is set to "None", the dnsConfig field has to be specified.
searches: a list of DNS search domains for hostname lookup in the Pod. This property is optional. When specified, the provided list will be merged into the base search domain names generated from the chosen DNS policy. Duplicate domain names are removed. Kubernetes allows for at most 6 search domains.
As for the Services for your backends.
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-one
spec:
  clusterIP: None # <-- IMPORTANT
  selector:
    app: nginx-one
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
The above Service will tell your frontend that one of your backends (nginx) is available through a Headless Service (why it's Headless will come in handy later!). By default you could communicate with it by:
service-name (nginx-one)
service-name.namespace.svc.cluster.local (nginx-one.default.svc.cluster.local) - only locally
Connecting to your backend
Assuming that you are sending the request using curl (for simplicity) from frontend to backend you will have a specific order when it comes to the DNS resolution:
check the DNS record inside the cluster
check the DNS record specified in dnsConfig
The specifics of connecting to your backend will be the following:
If the Pod with your backend is available in the cluster, the DNS resolution will point to the Pod's IP (not ClusterIP)
If the backend Pod is not available in the cluster for whatever reason, the DNS resolution will first check the internal records and then opt to use the DOMAIN.NAME from the dnsConfig (outside of minikube).
If there is no Service associated with specific backend (nginx-one), the DNS resolution will use the DOMAIN.NAME in the dnsConfig searching for it outside of the cluster.
A side note!
The Headless Service with selector comes into play here as its intention is to point directly to the Pod's IP and not the ClusterIP (which exists as long as the Service exists). If you used a "normal" Service you would always try to communicate with the ClusterIP, even if there are no Pods available matching the selector. By using a headless one, if there is no Pod, the DNS resolution would look further down the line (external sources).
Additional resources:
Minikube.sigs.k8s.io: Docs: Start
Aws.amazon.com: Blogs: Compute: Enabling dns resolution for amazon eks cluster endpoints
EDIT:
You could also take a look on alternative options:
Alternative option 1:
Use the rewrite plugin in CoreDNS to rewrite DNS queries for backendapi.learning.com to backendapi.default.svc.cluster.local.
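A minimal sketch of what that could look like, assuming the cluster runs the stock CoreDNS ConfigMap in kube-system and the backend Service lives in the default namespace (adjust the names to your setup; the rest of the Corefile shown here is the usual default and may differ in your cluster):
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        # rewrite the external-looking name to the in-cluster Service name
        rewrite name backendapi.learning.com backendapi.default.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }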
Alternative option 2:
Add hostAliases to the Frontend Pod
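For example (a sketch only; the IP below is a placeholder for wherever the ECS-hosted API is reachable), hostAliases adds a static entry to the Pod's /etc/hosts:
spec:
  template:
    spec:
      hostAliases:
      - ip: "203.0.113.10"   # placeholder for the external (ECS) endpoint
        hostnames:
        - "backendapi.learning.com"
Keep in mind that /etc/hosts entries are typically consulted before DNS, so this behaves more like an "always external" mapping than a local-first lookup.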
You can also use ConfigMaps to re-use .env files.
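A rough sketch of that idea (the resource names are hypothetical): put the .env keys into a ConfigMap and inject them with envFrom:
apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-env   # hypothetical name
data:
  BACKEND_API_URL: "http://backendapi.learning.com"
and in the frontend Deployment's container spec:
      containers:
      - name: frontend           # hypothetical container name
        image: frontend:latest   # hypothetical image
        envFrom:
        - configMapRef:
            name: frontend-env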
I am trying to set up a service and expose it externally on EKS. I have already done it on GKE pretty easily but now AWS is giving me a hard time.
My NGINX ingress YAML looks something like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - app.mydomain.com
    secretName: myapp-tls
  rules:
  - host: app.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp-service
          servicePort: 80
And then I have my domain app.mydomain.com on Google Domains pointing at the ingress external address. There is also a cert-manager service running in order to support HTTPS.
However, while basically the same setup worked completely out of the box on GKE, EKS gives me a hard time.
From what I understand it has something to do with EKS's default LoadBalancer being layer 4, in comparison to Google's layer 7 (which explains HTTPS not working), but there are also issues with redirection of the domain: it just resolves to the ingress address instead of my desired address, and thus my app doesn't show up.
The domain is registered with Google Domains and I'm creating Synthetic Records (for my subdomain) that point to my ingress external address on EKS. The same scheme works perfectly fine on GKE, but here it resolves the address as the ingress address instead of my domain, which results in a 404 on the ingress side.
I was wondering if someone could please point me to how to properly set this up? Should I give up on NGINX ingress on EKS and move on to ALB? And how do I properly associate the domain?
Thank you very much in advance!
Edit:
output of kubectl describe ingress myapp-ingress:
Name:             myapp-ingress
Namespace:        default
Address:          ********************************-****************.elb.eu-west-1.amazonaws.com
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  myapp-tls terminates app.mydomain.com
Rules:
  Host              Path  Backends
  ----              ----  --------
  app.mydomain.com
                    /     myapp-service:80 (172.31.2.238:8000)
Annotations:        cert-manager.io/cluster-issuer: myapp-letsencrypt-prod
                    kubernetes.io/ingress.class: nginx
Events:             <none>
Should I give up on nginx ingress on EKS and move onto ALB
No. NGINX ingress controllers work perfectly well on EKS. It is possible to configure them as either layer 4 or layer 7; we use it in layer 7 mode.
Can you update your question with the output of
kubectl get ingress myapp-ingress
I think your ingress path is also incorrect. Unless I'm mistaken, that's just routing the root of your app, not all URIs. We use the scheme:
spec:
  rules:
  - host: service.d.tld
    http:
      paths:
      - path: /?(.*) # <---
        backend:
          serviceName: my-service
          servicePort: http
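One caveat (my assumption based on how ingress-nginx handles regex paths; double-check against your controller version): regex matching usually needs to be enabled explicitly via an annotation, and a rewrite-target is often paired with the capture group, e.g.:
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    # optionally rewrite the captured group onto the backend path:
    # nginx.ingress.kubernetes.io/rewrite-target: /$1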
Are you seeing errors in the nginx ingress controller's logs? That + kubectl events are both useful for debugging purposes.
I'd disable TLS everywhere and get your service working on http, then work stepwise on getting TLS enabled on the ingress controller.
Edit: Based on your response above,
curl -H "Host: app.mydomain.com" http://<elb-address>:80
SHOULD call through to your service behind the ingress.
How is app.mydomain.com defined? Is it a CNAME to the load balancer's DNS entry?
I have a couple of questions
When we make changes to ingress resource, are there any cases where we have to delete the resource and re-create it again or is kubectl apply -f <file_name> sufficient?
When I add the host attribute without www, i.e. my-domain.in, I am not able to access my application, but with www, i.e. www.my-domain.in, it works. What's the difference?
Below is my ingress resource
When I have the host set to my-domain.in, I am unable to access my application, but when I set the host to www.my-domain.in I can access the application.
My domain is with a different provider, and I have added a CNAME (www) pointing to the DNS name of my ALB.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: eks-learning-ingress
  namespace: production
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:a982529496:cerd878ef678df
  labels:
    app: eks-learning-ingress
spec:
  rules:
  - host: my-domain.in # does not work
    http:
      paths:
      - path: /*
        backend:
          serviceName: eks-learning-service
          servicePort: 80
First answering your question 1:
When we make changes to ingress resource, are there any cases where we have to delete the resource and re-create it again or is kubectl apply -f sufficient?
In theory, yes, kubectl apply is the correct way; it will report either ingress unchanged or ingress configured.
Another valid, documented option is kubectl edit ingress INGRESS_NAME, which saves and applies at the end of the edit if the output is valid.
I said "in theory" because bugs happen, so we can't fully rule out having to re-create the resource, but that is the worst-case scenario.
Now the blurrier question 2:
When I add the host attribute without www i.e. (my-domain.in), I am not able to access my application but with www i.e. (www.my-domain.in) it works, what's the difference?
To troubleshoot it we need to isolate the steps; as in a chain, we have to find which link is broken. One by one:
Endpoint > Domain Provider > Cloud Provider > Ingress > Service > Pod.
DNS Resolution (Domain Provider)
DNS Resolution (Cloud Provider)
Kubernetes Ingress (Ingress > Service > Pod)
DNS Resolution
Domain Provider:
To the Internet, the one who answers for my-domain.in is your domain provider.
What are the rules for my-domain.in and its subdomains (like www.my-domain.in or admin.my-domain.in)?
You said "domain is on a different provider and I have added CNAME (www) pointing to DNS name of my ALB."
Are both my-domain.in and www.my-domain.in being pointed to the ALB address?
How does it handle subdomains? How is the request passed on to your cloud provider?
Cloud Provider:
OK, suppose the cloud provider is receiving the request correctly and distinctly.
Does your ALB have generic or specific rules for subdomains or path requests?
Test with another host, a different VM with a web server.
Check ALB Troubleshooting Page
Kubernetes Ingress
Usually we would start the troubleshooting from this part, but since you mentioned it works with www.my-domain.in, we can presume that your Service, Deployment, and even Ingress structure are working correctly.
You can check the Types of Ingress Docs to get a few examples of how it should work.
Bottom line: I believe your DNS has a record for www.my-domain.in, but the root (apex) domain has no record pointing to your cloud provider; that's why it only works when you enable the ingress for www.
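If that is the case, the fix lives mostly on the DNS side (an A/ALIAS-style record for the apex pointing at the ALB, since plain CNAMEs are generally not allowed at the apex), and the Ingress can then simply list both hosts. A sketch reusing the names from the question:
spec:
  rules:
  - host: my-domain.in
    http:
      paths:
      - path: /*
        backend:
          serviceName: eks-learning-service
          servicePort: 80
  - host: www.my-domain.in
    http:
      paths:
      - path: /*
        backend:
          serviceName: eks-learning-service
          servicePort: 80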
We have a microservice architecture based on Kubernetes in Amazon EKS with Ambassador as API Gateway.
We have 2 Ambassadors: 1 public and 1 private. So we have services that are only accessible by services in the cluster or VPN, and we have some services that are public.
We have the need for making private some URL paths in the public services. For example, we have a public API that is accessible in api.company.com, and we want to leave all paths public like api.company.com/createuser, api.company.com/login, etc... but for other paths we want to make them private, for example: api.company.com/swagger.html.
We know that we could enable authentication for those paths in the API, but we are looking for a solution without auth.
An example of how we configure K8s service with Ambassador for public services:
apiVersion: v1
kind: Service
metadata:
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: backends_mapping
      prefix: /
      ambassador_id: ambassador-public
      service: backends.svc:8080
      host: api.mycompany.com
  labels:
    app: backends
  name: backends
  namespace: svc
spec:
  ports:
  - name: http-backends
    port: 8080
    protocol: TCP
    targetPort: http-api
  selector:
    app: backends
  type: ClusterIP
Not sure what you mean by "without auth". You will need some sort of check to serve internal content.
One approach to achieve this can be the following (note this is a high-level overview; a rough sketch follows the list).
Make the service private; do not expose this service directly.
Prefix all your internal routes with, say, an /internal/ or /private/ prefix.
So api.company.com/swagger.html becomes api.company.com/internal/swagger.html.
You can create a load balancer that points to this middleware.
The middleware (public service) will intercept all the requests. I think NGINX can be used here. If the request has an /internal/ path, check whether it satisfies the condition (origin, internal network, etc.).
If the check passes, redirect to the private service.
If the check fails, return 403 Forbidden or whatever response code fits.
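As a rough sketch of how the Ambassador side of that could look, in the same annotation style as the Service above (all names here are hypothetical, and the /internal/ check itself would live in the middleware, e.g. an NGINX reverse proxy, not in the Mapping): the public Ambassador routes everything to the middleware, while the backends keep their Mapping only on the private Ambassador.
apiVersion: v1
kind: Service
metadata:
  name: middleware   # hypothetical filtering proxy in front of backends
  namespace: svc
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: middleware_public_mapping
      prefix: /
      ambassador_id: ambassador-public
      service: middleware.svc:8080
      host: api.mycompany.com
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: middleware
  type: ClusterIP
The existing backends Service would then switch its Mapping's ambassador_id to ambassador-private (or drop the Mapping entirely and be reached in-cluster as backends.svc:8080 by the middleware).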
Cilium can do just what you want: http://docs.cilium.io/en/stable/policy/language/#http
Basically, you can specify L7 network policies which will only allow access to some of your API paths from certain pods.
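A rough sketch of such a policy (the labels, namespace, and paths are illustrative, taken from the question's examples; check the docs linked below for the exact schema of your Cilium version):
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backends-l7-policy   # hypothetical name
  namespace: svc
spec:
  endpointSelector:
    matchLabels:
      app: backends
  ingress:
  - fromEndpoints:
    - {}                     # any in-cluster endpoint; tighten as needed
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:
        - path: "/login"
        - path: "/createuser"
        # no rule for /swagger.html, so it is rejected at L7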
Cilium project page: https://cilium.io/
Layer 7 policies example: http://docs.cilium.io/en/stable/policy/language/#http
EKS install guide: http://docs.cilium.io/en/v1.4/gettingstarted/k8s-install-eks/?highlight=eks
Disclaimer: I am part of the team that develops Cilium.
I'm on a journey of testing Istio, and at the moment I'm about to test the "canary" capabilities of routing traffic.
For my test, I created a small servicemesh composed of 5 microservices (serviceA, serviceB, serviceC, serviceD, serviceE). Each one is able to call the others. I just pass a path like A,E,C,B,B,D and the request follows this path.
In order to call my service mesh from outside the cluster I have an NGINX Ingress Controller with an Ingress rule that points to the serviceA pod.
This is working fine.
The problem I'm facing concerns traffic switching using a custom header match like this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ServiceA
  namespace: demo
  labels:
    app: demo
spec:
  hosts:
  - service-a
  http:
  - route:
    - destination:
        host: service-a
        subset: v1
  - match:
    - headers:
        x-internal-request:
          exact: "true"
    route:
    - destination:
        host: service-a
        subset: v2
So here, I want to try to route the traffic to the v2 version of ServiceA when I have the custom header x-internal-request set to true.
Questions:
In order to use this feature, do my services have to be aware of the x-internal-request header and do they have to pass it to the next service in the request? Or do they not need to deal with it because Istio does the job for them?
In order to use this feature, do I need to use the Istio Ingress Controller (with an Istio Gateway) instead of the NGINX Ingress Controller?
Today, I am using the NGINX Ingress Controller to expose some of my services. We chose NGINX because it has features like "external authorization" that save us a lot of work, and if we need to use the Istio ingress controller instead, I'm not sure it offers the same features as NGINX.
Perhaps there is a middle path I do not see.
Thank you for your help
Istio is designed to use Envoy, deployed on each Pod as a sidecar, to intercept and proxy network traffic between microservices in the service mesh.
You can manipulate HTTP headers for requests and responses via Envoy as well. According to the official documentation, custom headers can be added to the request/response in the following order: weighted cluster level headers, route level headers, virtual host level headers, and finally global level headers. Because your Envoy proxies are deployed on each relevant service Pod as a sidecar, the custom HTTP header should be passed along with each request or response.
I would recommend using the Istio Ingress Controller with its core component, the Istio Gateway, which is commonly used for enabling monitoring and routing-rule features in Istio mesh services.
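A minimal sketch of what that could look like, assuming the stock istio-ingressgateway installation and the names from the question (the external hostname is a placeholder):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: demo-gateway        # hypothetical name
  namespace: demo
spec:
  selector:
    istio: ingressgateway   # Istio's default ingress gateway deployment
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "demo.example.com"    # placeholder external hostname
The VirtualService above would then be bound to it by adding gateways: [demo-gateway] and the external hostname to its spec, so the x-internal-request match is evaluated at the mesh edge rather than only on service-to-service traffic.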
There was an issue opened on GitHub about using the NGINX Ingress controller with mesh services and the problems with routing requests.