I am having trouble upgrading our CLB to an NLB. I did a manual upgrade via the wizard in the console, but connectivity wouldn't work afterwards. This upgrade is needed so we can use static IPs on the load balancer. I think it needs to be upgraded through Kubernetes, but my attempts failed.
What I (think I) understand about this setup is that this load balancer was set up using Helm. I also understand that the ingress (controller) is responsible for redirecting HTTP requests to HTTPS, and that this LB is working on layer 4.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.30.0
    component: controller
    heritage: Tiller
    release: nginx-ingress-external
  name: nginx-ingress-external-controller
  namespace: kube-system
  selfLink: /api/v1/namespaces/kube-system/services/nginx-ingress-external-controller
spec:
  clusterIP: 172.20.41.16
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 30854
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 30621
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress-external
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: xxx.region.elb.amazonaws.com
How would I be able to perform the upgrade by modifying this configuration file?
As @Jonas pointed out in the comments, creating a new LoadBalancer Service with the same selector as the existing one is probably the fastest and easiest method. As a result we will have two LoadBalancer Services using the same ingress controller.
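For example, a second Service for the NLB could look roughly like the sketch below. It reuses the selector and ports from your existing manifest; the annotation is the documented way to ask the AWS cloud provider for an NLB, and the Service name is just a placeholder:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-external-nlb   # placeholder name
  namespace: kube-system
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress-external
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https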
You can see in the following snippet that I have two Services (ingress-nginx-1-controller and ingress-nginx-2-controller) with exactly the same endpoint:
$ kubectl get pod -o wide ingress-nginx-1-controller-5856bddb98-hb865
NAME READY STATUS RESTARTS AGE IP
ingress-nginx-1-controller-5856bddb98-hb865 1/1 Running 0 55m 10.36.2.8
$ kubectl get svc ingress-nginx-1-controller ingress-nginx-2-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP
ingress-nginx-1-controller LoadBalancer 10.40.15.230 <PUBLIC_IP>
ingress-nginx-2-controller LoadBalancer 10.40.11.221 <PUBLIC_IP>
$ kubectl get endpoints ingress-nginx-1-controller ingress-nginx-2-controller
NAME ENDPOINTS AGE
ingress-nginx-1-controller 10.36.2.8:443,10.36.2.8:80 39m
ingress-nginx-2-controller 10.36.2.8:443,10.36.2.8:80 11m
Additionally, to avoid downtime, we can first change the DNS records to point at the new LoadBalancer, and after the propagation time we can safely delete the old LoadBalancer Service.
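Once DNS has propagated and traffic has drained, removing the old Service (and with it the CLB it created) is a single command; the name below is the one from your manifest:

kubectl -n kube-system delete svc nginx-ingress-external-controller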
Related
I have created a cluster on AWS EC2 using kops consisting of a master node and two worker nodes, all with public IPv4 assigned.
Now, I want to create a deployment with a service using NodePort to expose the application to the public.
After having created the service, I retrieve the following information, showing that it correctly identified my three pods:
nlykkei:~/projects/k8s-examples$ kubectl describe svc hello-svc
Name: hello-svc
Namespace: default
Labels: app=hello
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"hello"},"name":"hello-svc","namespace":"default"},"spec"...
Selector: app=hello-world
Type: NodePort
IP: 100.69.62.27
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30001/TCP
Endpoints: 100.96.1.5:8080,100.96.2.3:8080,100.96.2.4:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
However, when I try to visit any of my public IPv4 addresses on port 30001, I get no response from the server. I have already created a Security Group allowing all ingress traffic to port 30001 for all of the instances.
Everything works with Docker Desktop for Mac, and there I notice the following service field, which is not present in the output above:
LoadBalancer Ingress: localhost
I've already studied https://kubernetes.io/docs/concepts/services-networking/service/, and I think NodePort should serve my needs?
Any help is appreciated!
So you want to have a service that can be accessed from the public. To achieve this I would recommend creating a ClusterIP Service and then an Ingress for that Service. So, say you have the deployment hello-world and want to expose it on port 8081; you will then have the following two objects:
Service:
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  ports:
  - name: service
    port: 8081        # or whatever port you want the Service to expose
    protocol: TCP
    targetPort: 8080  # the port opened in your pods
  selector:
    app: hello-world
  type: ClusterIP
Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  labels:
    app: hello-world
  name: hello-world
spec:
  rules:
  - host: hello-world.mycutedomainname.com
    http:
      paths:
      - backend:
          serviceName: hello-world
          servicePort: 8081  # or whatever you have set as the Service port
        path: /
Note: the name field on the Service's port is optional.
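To confirm the two objects are wired together, a quick check along these lines should do; the file names are just placeholders for wherever you saved the manifests:

kubectl apply -f hello-world-svc.yaml -f hello-world-ingress.yaml
kubectl get ingress hello-world
kubectl describe svc hello-world   # Endpoints should list your pod IPs on port 8080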
I am fairly new to Istio. So far I have a k8s cluster (created with kops) on AWS, behind an ELB.
All traffic is routed via TCP.
The ingress gateway service is configured as NodePort with the following config:
istio-system istio-ingressgateway NodePort 100.65.241.150 <none> 15020:31038/TCP,80:30205/TCP,31400:30204/TCP,15029:31714/TCP,15030:30016/TCP,15031:32508/TCP,15032:30110/TCP,15443:32730/TCP
I have used the 'demo' Helm option to deploy Istio 1.4.0.
I have created a Gateway, VirtualService, and DestinationRule with the following config.
The Gateway is in the istio-system namespace; the VirtualService and DestinationRule are in the default namespace.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: webapp
  namespace: default
spec:
  hosts:
  - "*"
  gateways:
  - ingress-gateway
  http:
  - route:
    - destination:
        host: webapp
        subset: original
      weight: 100
    - destination:
        host: webapp
        subset: v2
      weight: 0
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: webapp
  namespace: default
spec:
  host: webapp
  subsets:
  - labels:
      version: original
    name: original
  - labels:
      version: v2
    name: v2
The service pods listen on port 80; I have tested them via port forwarding and they function as expected.
However, when I curl https://hostname externally I get a
<head><title>504 Gateway Time-out</title></head>
<body bgcolor="white">
<center><h1>504 Gateway Time-out</h1></center>
I have enabled debug logging in Envoy, but I don't see anything meaningful in the logs relating to the timeout.
Any suggestions on where I might be going wrong?
Do I need to add any service annotations relating to the ELB on the Istio ingress gateway?
Any other suggestions?
I found a few things which need to be fixed
1. Connect with the load balancer
As I mentioned in the comments, you need to fix your ingress gateway so it automatically gets an EXTERNAL-IP address, as in the Istio documentation. Right now your ingress gateway is a NodePort, so as far as I'm concerned it won't work as-is; you can configure things to work with the NodePort, but I assume you want the load balancer.
The first step would be to change the istio-ingressgateway svc type from NodePort to LoadBalancer and check if you get the EXTERNAL-IP.
If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is <none> (or perpetually <pending>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service's node port.
It should look like this:
kubectl get svc istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 172.21.109.129 130.211.10.121 80:31380/TCP,443:31390/TCP,31400:31400/TCP 17h
And then everything goes through the EXTERNAL-IP address, which is 130.211.10.121 in this example.
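One way to switch the type without reinstalling is a kubectl patch along these lines (a sketch; if you manage the Service through Helm values, change it there instead so it doesn't get reverted):

kubectl -n istio-system patch svc istio-ingressgateway \
  -p '{"spec": {"type": "LoadBalancer"}}'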
2. Fix your YAMLs
Note: for TCP traffic like that, we must match on the incoming port, in this case port 31400.
Check this example from the Istio documentation, especially the part with the Gateway, VirtualService, and DestinationRule.
You should add this to your VirtualService:
  tcp:
  - match:
    - port: 31400
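Putting that together with your existing hosts and subsets, the TCP section of the VirtualService could look roughly like the sketch below. I'm assuming the webapp Service listens on port 80, since that is where your pods listen; adjust if the Service port differs:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: webapp
  namespace: default
spec:
  hosts:
  - "*"
  gateways:
  - ingress-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: webapp
        subset: original
        port:
          number: 80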
3. Remember about namespaces.
In your example, because everything is in the default namespace, it should work. But if you create another namespace, remember that when the Gateway and the VirtualService are in different namespaces, you need to tell the VirtualService where the Gateway is.
Example here.
Especially this part in the VirtualService:
  gateways:
  - some-config-namespace/my-gateway
I hope this helps with your issues. Let me know if you have any more questions.
I am setting up NGINX ingress controller on AWS EKS.
I went through the k8s Ingress resource and it is very helpful for understanding how we map LB ports to k8s service ports with an example file definition. I installed the nginx controller up to the prerequisite step. Then the tutorial directs me to create an Ingress resource:
https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/#create-an-ingress-resource
But below that, it tells me to apply a service config. I am confused by this provider-specific step, which is different in terms of kind, version, and spec definition (Service vs. Ingress):
https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l7.yaml
Am I missing something here?
This is a concept that is at first a little tricky to wrap your head around. The Nginx ingress controller is exposed as a Service of type LoadBalancer, and it acts as the public-facing endpoint for your services. The IP address assigned to this Service can route traffic to multiple services. So you can go ahead and define your services as ClusterIP and have them exposed through the Nginx ingress controller.
Here's a diagram to portray the concept a little better: [diagram omitted; see the original image source]
On that note, if you have acquired a static IP for your service, you need to assign it to your Nginx ingress controller. So what is an Ingress? An Ingress is basically a way for you to tell your Nginx ingress controller how to direct the traffic arriving at your LB's public IP. So, as should be clear now, you have one LoadBalancer service and multiple Ingress resources. Each Ingress corresponds to a single service (this can change based on how you define your services), but you get the idea.
Let's get into some YAML code. As mentioned, you will need the ingress controller Service regardless of how many Ingress resources you have, so go ahead and apply this code on your EKS cluster.
Now let's see how you would expose your pod to the world through the Nginx ingress. Say you have a WordPress deployment. You can define a simple ClusterIP Service for this app:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ${WORDPRESS_APP}
  namespace: ${NAMESPACE}
  name: ${WORDPRESS_APP}
spec:
  type: ClusterIP
  ports:
  - port: 9000
    targetPort: 9000
    name: ${WORDPRESS_APP}
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    app: ${WORDPRESS_APP}
This creates a service for your wordpress app which is not accessible outside of the cluster. Now you can create an ingress resource to expose this service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: ${NAMESPACE}
  name: ${INGRESS_NAME}
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - ${URL}
    secretName: ${TLS_SECRET}
  rules:
  - host: ${URL}
    http:
      paths:
      - path: /
        backend:
          serviceName: ${WORDPRESS_APP}
          servicePort: 80
Now if you run kubectl get svc you can see the following:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
wordpress ClusterIP 10.23.XXX.XX <none> 9000/TCP,80/TCP,443/TCP 1m
nginx-ingress-controller LoadBalancer 10.23.XXX.XX XX.XX.XXX.XXX 80:X/TCP,443:X/TCP 1m
Now you can access your wordpress service through the URL defined, which maps to the public IP of your ingress controller LB service.
The NGINX ingress controller is the actual process that shapes your traffic to your services, basically like an nginx or load balancer installation on a traditional VM.
The Ingress resource (kind: Ingress) is more like the nginx config on your old VM, where you would define host mappings, paths, and proxies.
I want to set up an ingress controller on AWS EKS for several microservices that are accessed from an external system.
The microservices are accessed via virtual host-names like svc1.acme.com, svc2.acme.com, ...
I set up the nginx ingress controller with a helm chart: https://github.com/helm/charts/tree/master/stable/nginx-ingress
My idea was to reserve an Elastic IP Address and bind the nginx-controller to that IP by setting the variable externalIP.
This way I should be able to access the services with a stable wildcard DNS entry *.acme.com --> 54.72.43.19
I can see that the ingress controller service gets the externalIP, but the IP is not accessible.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-ingress-controller LoadBalancer 10.100.45.119 54.72.43.19 80:32104/TCP,443:31771/TCP 1m
Any idea why?
Update:
I installed the ingress controller with this command:
helm install --name ingress -f values.yaml stable/nginx-ingress
Here is the gist for the values; the only thing changed from the default is
externalIPs: ["54.72.43.19"]
https://gist.github.com/christianwoehrle/3b136023b1e0085b028a67ca6a0959b7
Maybe you can achieve that by using a Network Load Balancer (https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html), which supports fixed IPs, as the backing for your Nginx ingress, e.g. (https://aws.amazon.com/blogs/opensource/network-load-balancer-support-in-kubernetes-1-9/):
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  externalTrafficPolicy: Local
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
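If you specifically need the NLB to use your reserved Elastic IP rather than an automatically assigned address, there is an annotation for passing EIP allocation IDs; treat the exact key and the Kubernetes version support as something to verify against the AWS cloud provider documentation, and note that the allocation ID below is just a placeholder:

metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # Assumption: one allocation ID per subnet/AZ the NLB spans
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-0123456789abcdef0"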
I am using Istio version 1.0.2 with istio-demo-auth.yaml. I have one mssql and one activemq instance deployed in the same namespace as other applications, both injected by istioctl. The applications can connect to those two services inside the cluster. I changed those two services' type to NodePort, which succeeded, but I cannot access those node ports (52433, 51618, or 58161).
kubectl get svc -n $namespace
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
amq-master-01 NodePort 10.254.176.151 61618:51618/TCP,8161:58161/TCP 4h
mssql-master NodePort 10.254.209.36 2433:52433/TCP 33m
kubectl get deployment -n $namespace
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
activemq 1 1 1 1 4h
mssql-master 1 1 1 1 44m
Then I tried using a Gateway and VirtualService with the ingressgateway TCP port 31400. It works, as shown below:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tcp-gateway
  namespace: multitenancy
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: mssql-tcp
  namespace: multitenancy
spec:
  gateways:
  - tcp-gateway
  hosts:
  - "*"
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: mssql-master
        port:
          number: 2433
My questions are:
1. How do I configure another HTTP connection for 61618, or other TCP connections? Currently I can only use 31400, and only for one service (mssql on 2433).
2. Why is the NodePort not working after I inject those applications into Istio, and how can I make it work?
Thanks.
Referring to the documentation:
Type NodePort
If you set the type field to NodePort, the Kubernetes master will allocate a port from a range specified by --service-node-port-range flag (default: 30000-32767), and each Node will proxy that port (the same port number on every Node) into your Service. That port will be reported in your Service’s .spec.ports[*].nodePort field.
Just update the configuration of all your masters and you will be able to allocate any port.
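For example, the relevant kube-apiserver flag looks like this; where you set it depends on how your control plane is managed (static pod manifest, kops cluster spec, etc.), so treat the file path below as an assumption:

# e.g. in the kube-apiserver static pod manifest, often /etc/kubernetes/manifests/kube-apiserver.yaml
- --service-node-port-range=30000-60000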
Regarding the second question:
I suggest you create an issue on GitHub, because it looks like a bug; there are no restrictions on using nodePort mentioned in the documentation.