I have a question about Kubernetes Ingress.
I want to use Ingress with my Amazon account and/or a private cloud, and I want to assign an external IP.
It is possible to assign an external IP for Services:
Services documentation - chapter external IP
but I cannot find a way to do that for Ingress: Ingress documentation.
My question is directed especially at the Kubernetes team.
A similar question was asked by Simon in this topic: How to force SSL for Kubernetes Ingress on GKE
but he asked about GKE, while I am interested in a private cloud and AWS.
Thank you in advance.
[UPDATE]
I found that my question may already have been answered in this topic.
Actually, the answer that #anigosa put there is specific to GCloud.
His solution won't work in a private cloud nor in the AWS cloud. In my opinion, the reason is that he uses type: LoadBalancer (which cannot be used in a private cloud) and the loadBalancerIP property, which works only on GCloud (on AWS it causes the error: "Failed to create load balancer for service default/nginx-ingress-svc: LoadBalancerIP cannot be specified for AWS ELB").
Looking at this issue, it seems you can define an annotation on your service and map it to an existing elastic IP.
Something like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: <>
spec:
  type: LoadBalancer
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
Please note this will create an ELB for this service, not an ingress.
As an ingress is simply one service (=ELB) handling requests for many other services, it should be possible to do something similar for an ingress, but I couldn't find any docs for it.
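For what it's worth, the same idea should apply to an ingress controller, since its entry point is itself a Service: annotate the controller's own LoadBalancer Service with the EIP allocation. A hedged sketch (the service name and selector are hypothetical; the eip-allocations annotation applies to NLBs, hence the nlb type annotation):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  annotations:
    # NLB is required for the eip-allocations annotation to take effect
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: <>
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80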
There are two main ways you can do this. One is using a static IP annotation, as shown in Omer's answer (which is cloud-specific and normally relies on the external IP being set up beforehand); the other is using an ingress controller (which is generally cloud-agnostic).
The ingress controller will obtain an external IP on its service and then pass that to your ingress, which will then use that IP as its own.
Traffic will then come into the cluster via the controller's service, and the controller will route it to your ingress.
Here's an example of the ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: my-ingress-class
spec:
  tls:
    - hosts:
        - ssl.somehost.com
  rules:
    - host: ssl.somehost.com
      http:
        paths:
          - backend:
              serviceName: backend-service
              servicePort: 8080
The line
kubernetes.io/ingress.class: my-ingress-class
tells the cluster we want only an ingress controller that handles this "class" of ingress traffic. You can have multiple ingress controllers in the cluster, each declaring that it handles a different class of ingress traffic; so when you install an ingress controller, you also need to declare which ingress class you want it to handle.
Caveat: If you do not declare the ingress class on an ingress resource, ALL the ingress controllers in the cluster will attempt to route traffic to that ingress.
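For example, with the ingress-nginx Helm chart the class can be declared at install time (a sketch; the exact value name can vary between chart versions):

helm install my-ingress-controller ingress-nginx/ingress-nginx \
  --set controller.ingressClass=my-ingress-class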
Now, if you want an external IP that is private, you can do that via the controller. For AWS and GCP there are annotations that tell the cloud provider you want an internal-only IP; add the specific annotation to the load balancer Service of the ingress controller.
For AWS:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
For GCP:
networking.gke.io/load-balancer-type: "Internal"
or (< Kubernetes 1.17)
cloud.google.com/load-balancer-type: "Internal"
Your ingress will inherit the IP obtained by the ingress controller's load balancer.
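Putting it together, a minimal sketch of the controller's Service on AWS (names and namespace are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: my-ingress-controller
  namespace: ingress-system
  annotations:
    # ask the AWS cloud provider for an internal-only load balancer
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-ingress-controller
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443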
Related
I just set up a private EKS cluster with an external DNS. A service is exposed on a Fargate instance and accessible via https://IP. The service is furthermore annotated with
external-dns.alpha.kubernetes.io/internal-hostname: duplicate-clearing-dev.aws.ui.loc
Thus a DNS entry is created by the external DNS (Bitnami). Yet it routes to all IP addresses I have running in my EKS cluster instead of the one (IP) the service is running on, and I don't know why.
A similar setup with Ingress worked just fine, where the DNS entry routed to a load balancer.
So my question is whether I am missing some kind of selector to route the DNS entry to the single correct IP.
My service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: "service-duplicate-clearing"
  namespace: "duplicate-clearing"
  annotations:
    external-dns.alpha.kubernetes.io/internal-hostname: duplicate-clearing-dev.aws.ui.loc
spec:
  ports:
    - port: 443
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app: duplicate-clearing
Thanks in advance,
Eric
What I was missing was the following field in the spec:
externalTrafficPolicy: Local
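In context, the spec from my service above becomes:

spec:
  externalTrafficPolicy: Local
  ports:
    - port: 443
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app: duplicate-clearing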
I'd like to create an nginx ingress controller with an AWS internal NLB. The requirement is to fix the IP address of the NLB endpoint. For example, the NLB DNS name of the nginx ingress service is currently abc.elb.eu-central-1.amazonaws.com, which resolves to the IP address 192.168.1.10; if I delete and re-create the nginx ingress controller, I want the NLB DNS name to be the same as before.
Looking at the Kubernetes service annotations, I did not see any way to re-use an existing NLB. However, I found the annotation service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses in this link. As far as I understand, it allows me to set the IP address of the NLB, but it does not work as I expected: every time I re-create the nginx controller, the IP address is different. Below is the K8s service YAML file.
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: "10.136.103.251"
    service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-00df069133b22"
  labels:
    helm.sh/chart: ingress-nginx-3.23.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.44.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
I know this requirement is weird; is it possible to do this?
If your Kubernetes cluster runs in a VPC with more than one subnet (which is probably the case), you must provide a private IP address for each subnet.
I installed the AWS Load Balancer Controller with its Helm chart, then I installed the nginx ingress controller with this Helm chart:
helm install nginx-ingress ingress-nginx/ingress-nginx --namespace nginx-ingress -f internal-ingress-values.yaml
Here is the content of internal-ingress-values.yaml:
controller:
  ingressClass: nginx
  service:
    enableHttp: false
    enableHttps: true
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: external
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
      service.beta.kubernetes.io/aws-load-balancer-scheme: internal
      service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: 10.136.103.251, 10.136.104.251
      service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-00a1a7f9949aa0ba1, subnet-12ea9f1df24aa332c
  ingressClassResource:
    enabled: true
    default: true
According to the documentation, the length/order of the service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses annotation must match the subnets annotation.
So, you must provide the IP addresses and subnets in the same order (don't mismatch them).
If you take my example above, you must make sure that:
10.136.103.251 is included in subnet-00a1a7f9949aa0ba1
10.136.104.251 is included in subnet-12ea9f1df24aa332c
It's a good idea to tag your subnets according to the documentation:
Key: kubernetes.io/cluster/my-cluster-name
Value: shared
Key: kubernetes.io/role/internal-elb
Value: 1
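For example, a sketch using the AWS CLI (the cluster name is a placeholder; substitute your own subnet IDs):

aws ec2 create-tags \
  --resources subnet-00a1a7f9949aa0ba1 subnet-12ea9f1df24aa332c \
  --tags Key=kubernetes.io/cluster/my-cluster-name,Value=shared \
         Key=kubernetes.io/role/internal-elb,Value=1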
I tested this on K8s 1.20 and it works for my project.
Don't provide "ingressClassResource" if you're on K8s <= 1.17.
The only LBs that will be managed (at least as of the current version, 2.3, of the AWS LB Controller) are the "nlb-ip" and "external" types. This is specified at:
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/guide/service/annotations/#legacy-cloud-provider
The annotation service.beta.kubernetes.io/aws-load-balancer-type is used to determine which controller reconciles the service. If the annotation value is nlb-ip or external, legacy cloud provider ignores the service resource (provided it has the correct patch) so that the AWS Load Balancer controller can take over. For all other values of the annotation, the legacy cloud provider will handle the service. Note that this annotation should be specified during service creation and not edited later.
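To illustrate, a minimal hypothetical Service that the legacy cloud provider will ignore and the AWS Load Balancer Controller will reconcile (name and selector are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # "external" (or "nlb-ip") hands this Service over to the AWS Load Balancer Controller
    service.beta.kubernetes.io/aws-load-balancer-type: external
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080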
I have multiple deployments of an RDP application running, and they are all exposed with a ClusterIP service. I have the nginx-ingress controller in my K8s cluster, and to allow TCP I added the --tcp-services-configmap flag to the nginx-ingress controller deployment and also created a ConfigMap for it, shown below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  3389: "demo/rdp-service1:3389"
This will expose the "rdp-service1" service. I have 10 more such services that need to be exposed on the same port number, but if I add more services to the same ConfigMap, like this:
...
data:
  3389: "demo/rdp-service1:3389"
  3389: "demo/rdp-service2:3389"
then it will overwrite the previous service entry. And since I have also deployed external-dns in K8s, all the records created by the ingress using host: ... will start pointing to the deployment attached to the newly added service in the ConfigMap.
Now my final requirement is that as soon as I append the rule for a newly created deployment (RDP application) to the ingress, it starts allowing TCP connections for it. Is there any way to achieve this? Or is there any other ingress controller available that can handle this type of use case and can also be easily integrated with external-dns?
Note: I am using an AWS EKS cluster and Route53 with external-dns.
Posting this answer as a community wiki to explain some of the topics in the question as well as hopefully point to the solution.
Feel free to expand/edit it.
NGINX Ingress's main responsibility is to forward HTTP/HTTPS traffic. With the addition of the tcp-services/udp-services ConfigMaps, it can also forward TCP/UDP traffic to the respective endpoints:
Kubernetes.github.io: Ingress nginx: User guide: Exposing tcp udp services
The main issue is that Host-based routing for an Ingress resource in Kubernetes specifically targets HTTP/HTTPS traffic, not TCP (RDP).
You could achieve the following scenario (see the ConfigMap sketch after this list):
Ingress controller:
3389 - RDP Deployment #1
3390 - RDP Deployment #2
3391 - RDP Deployment #3
Here there would be no Host-based routing; it would be more like port-forwarding.
A side note!
This setup would also depend on the ability of the LoadBalancer to allocate ports (which could be limited by cloud provider specifications).
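A sketch of what the tcp-services ConfigMap could look like for that scenario (service names follow the question; the extra external ports are assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # one unique external port per RDP Deployment, each forwarding to 3389 inside
  "3389": "demo/rdp-service1:3389"
  "3390": "demo/rdp-service2:3389"
  "3391": "demo/rdp-service3:3389"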
As for possible solutions, which may not be so straightforward, I would take a look at the following resources:
Stackoverflow.com: Questions: Nginx TCP forwarding based on hostname
Doc.traefik.io: Traefik: Routing: Routers: Configuring TCP routers
Github.com: Bolkedebruin: Rdpgw
I'd also check following links:
Aws.amazon.com: Quickstart: Architecture: Rd gateway - AWS specific
Docs.konghq.com: Kubernetes ingress controller: 1.2.X: Guides: Using tcpingress
Haproxy:
Haproxy.com: Documentation: Aloha: 12-0: Deployment guides: Remote desktop: RDP gateway
Haproxy.com: Documentation: Aloha: 10-5: Deployment guides: Remote desktop
Haproxy.com: Blog: Microsoft remote desktop services rds load balancing and protection
Actually, I really don't know why you are using that ConfigMap.
To my knowledge, the nginx-ingress-controller routes traffic coming in on the same port based on host. So if you want to expose your applications on the same port, try using this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ .Chart.Name }}-ingress
  namespace: your-namespace
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: your-hostname
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              serviceName: {{ .Chart.Name }}-service
              servicePort: {{ .Values.service.nodeport.port }}
Looking at your requirement, I feel that you need a LoadBalancer rather than an Ingress.
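For instance, a hedged sketch exposing one RDP Deployment directly through its own LoadBalancer Service (the name and selector are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: rdp-service1-lb
  namespace: demo
spec:
  type: LoadBalancer
  selector:
    app: rdp-app1
  ports:
    - protocol: TCP
      port: 3389
      targetPort: 3389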
I'm having a hard time getting this working with an NLB using the ingress controller:
https://kubernetes.github.io/ingress-nginx/deploy/#network-load-balancer-nlb
Even the subnets are not taking effect here; my configuration is not being passed to the API call that creates the NLB:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-07e3afcd4b7b5d644,eipalloc-0d9cb0154be5ab55d,eipalloc-0e4e5ec3df81aa3ea"
    service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-061f4a497621a7179,subnet-001c2e5df9cc93960"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
The number of EIP allocations must match the number of subnets in the subnet annotation.
service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-xyz, eipalloc-zzz
service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-xxxx, mySubnet
You have 3 allocations but only 2 subnets.
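For example, keeping your IDs, dropping one allocation makes the counts line up (alternatively, add a third subnet):

service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-07e3afcd4b7b5d644,eipalloc-0d9cb0154be5ab55d"
service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-061f4a497621a7179,subnet-001c2e5df9cc93960"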
In addition, the annotation
service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
is missing.
By default this will use scheme "internal".
I assume that since you are allocating elastic IP addresses, you want "internet-facing".
Also, you are using annotations that are meant for the "AWS Load Balancer Controller", but you are actually running the "AWS cloud provider load balancer controller":
The external value for aws-load-balancer-type is what causes the AWS
Load Balancer Controller, rather than the AWS cloud provider load
balancer controller, to create the Network Load Balancer.
docs
You are using service.beta.kubernetes.io/aws-load-balancer-type: nlb,
which means that none of the links provided earlier in this answer pertain to your load balancer. The nlb type belongs to the "AWS cloud provider load balancer controller", not the "AWS Load Balancer Controller".
For the "AWS cloud provider load balancer controller", all the docs reference is this.
So, as it turned out, these annotations will be supported only from Kubernetes 1.16, which is "coming soon" on AWS.
The currently supported version is 1.15, which just ignores those annotations...
Considering that you are using AWS-specific annotations here (service.beta.kubernetes.io/aws-load-balancer-eip-allocations), I assume that this is exactly the reason why it does not work in your case.
As a workaround, I would advise:
Create a custom post-deployment script that re-configures the newly-created LoadBalancer after each Kubernetes Service update.
Switch to something more conventional, like an ELB with your container and Auto Scaling groups (that's what we did).
Set up your own Kubernetes controller (a super-hard thingie, which will become completely obsolete and basically be a waste of time as soon as 1.16 is officially out). See this how-to.
Wait...
Official statement:
https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html#1-16-prequisites
Full list of annotations (once they are "supported", of course):
https://github.com/kubernetes/kubernetes/blob/v1.16.0/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L208-L211
Stay tuned! :(
I have a K8s cluster deployed on AWS.
I created a load balancer service with the annotation:
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
Now I saw that K8s created a new ELB attached to a security group with an inbound rule opening 443 to 0.0.0.0/0.
I tried to find an additional annotation that manages source IPs (predefined IPs instead of 0.0.0.0/0) but couldn't find one.
Do you know if there is an option to manage this as part of the annotations?
Make use of loadBalancerSourceRanges in the LoadBalancer service resource, as described here.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  ports:
    - port: 8765
      targetPort: 9376
  selector:
    app: example
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 10.0.0.0/8
Update:
In case of nginx-ingress you can use the nginx.ingress.kubernetes.io/whitelist-source-range annotation.
For more info, check this.
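A minimal sketch of an Ingress using that annotation (host and backend names are hypothetical):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: myapp
  annotations:
    kubernetes.io/ingress.class: nginx
    # only these CIDRs will be allowed through nginx for this Ingress
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8"
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - backend:
              serviceName: myapp
              servicePort: 8765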