Failed to create an ALB for my EKS cluster?

I deployed an EKS cluster in AWS. I'd like to create an ALB in front of my cluster. I used the command below to create a service account:
eksctl create iamserviceaccount --namespace default --name alb-ingress-controller --cluster $componentName --attach-policy-arn $servicePolicyArn --approve --override-existing-serviceaccounts
Below is the Ingress I created in Kubernetes:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: es-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: es-entrypoint
          servicePort: 80
After applying the config, I get an empty ADDRESS when I run:
$ kubectl get ingress/es-ingress
NAME         CLASS    HOSTS   ADDRESS   PORTS   AGE
es-ingress   <none>   *                 80      2d5h
I am able to see the service account:
$ kubectlaws get serviceaccount alb-ingress-controller
NAME SECRETS AGE
alb-ingress-controller 1 31h
What did I do wrong?

First of all, it would be good to know how you set up the ALB.
A service account is not required; instead you need an Ingress Controller. Without one, your Ingress resource is useless. There are a lot of different Ingress Controllers; one of the easiest is ingress-nginx. But check this comparison: https://docs.google.com/spreadsheets/d/191WWNpjJ2za6-nbG4ZoUMXMpUK8KlCIosvQB0f-oq3k/edit#gid=907731238
So, the easiest way is:
1. Set up an Ingress Controller.
2. Configure the Ingress Controller's service as a NodePort with a port like 30080 (see the sketch after this list).
3. Set up your ALB and configure the AWS Target Group to use NodePort 30080.
4. Set up the Ingress resource like above (you don't need the wildcard in the path).
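For illustration, here is a minimal sketch of step 2: the Ingress Controller's Service exposed as a NodePort on 30080. It assumes a default ingress-nginx installation in the ingress-nginx namespace; the name, namespace and labels may differ in your setup.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # name assumes the default ingress-nginx install
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
  - name: http
    port: 80
    targetPort: http
    nodePort: 30080                # the port your ALB Target Group should point at
The ALB's Target Group then registers the worker nodes on port 30080, and the controller forwards traffic according to whatever your Ingress rules define.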
Now all the traffic from the ALB will be redirected to your NodePort (the Ingress Controller). The Ingress resource is responsible for configuring the Nginx configuration inside the Ingress Controller. That's it!
The whole traffic flow could look like this: client → ALB → NodePort (Ingress Controller) → Service → Pods.
If this doesn't work:
Check the Ingress Controller logs: kubectl logs --follow --namespace YOURNAMESPACE NAMEOFTHEINGRESSCONTROLLER
If you don't get any logs there, enable the AWS ALB access logs and check them.
I hope that was helpful, if not, please provide some more detailed information about your infrastructure.

Related

How can I set up an Ingress to connect to a ClusterIP service?

Objective
I have deployed Apache Airflow on AWS' Elastic Kubernetes Service using Airflow's stable Helm chart. My goal is to create an Ingress to allow others to access the Airflow webserver UI via their browser. It's worth mentioning that I am deploying on EKS using AWS Fargate. My experience with Kubernetes is somewhat limited, and I have not set up an Ingress myself before.
What I have tried to do
I am currently able to connect to the Airflow webserver pod via port-forwarding (like kubectl port-forward airflow-web-pod 8080:8080). I have tried setting up the Ingress through the Helm chart (documented here). After that:
Running kubectl get ingress -n dp-airflow I got:
NAME             CLASS    HOSTS         ADDRESS   PORTS   AGE
airflow-flower   <none>   foo.bar.com             80      3m46s
airflow-web      <none>   foo.bar.com             80      3m46s
Then running kubectl describe ingress airflow-web -n dp-airflow I get:
Name:             airflow-web
Namespace:        dp-airflow
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host         Path       Backends
  ----         ----       --------
  foo.bar.com
               /airflow   airflow-web:web (<redacted_ip>:8080)
Annotations:      meta.helm.sh/release-name: airflow
                  meta.helm.sh/release-namespace: dp-airflow
I am not sure what I need to put into the browser, so I have tried using http://foo.bar.com/airflow as well as the cluster endpoint/IP, without success.
This is what the Airflow webserver Service looks like:
Running kubectl get services -n dp-airflow, I get:
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
airflow-web   ClusterIP   <redacted_ip>   <none>        8080/TCP   28m
Other things I have tried
I have tried creating an Ingress without the Helm chart (I am using Terraform), like:
resource "kubernetes_ingress" "airflow_ingress" {
metadata {
name = "ingress"
}
spec {
backend {
service_name = "airflow-web"
service_port = 8080
}
rule {
http {
path {
backend {
service_name = "airflow-web"
service_port = 8080
}
path = "/airflow"
}
}
}
}
}
However I was still not able to connect to the web UI. What are the steps that I need to take to set up an Ingress? Which address do I need to use in my browser to connect to the web UI?
I am happy to provide further details if needed.
It sounds like you have created Ingress resources. That is a good step. But for those Ingress resources to have any effect, you also need an Ingress Controller that can realize your Ingress into an actual load balancer.
In an AWS environment, you should look at the AWS Load Balancer Controller, which creates an AWS Application Load Balancer configured according to your Ingress resources.
Ingress to connect to a ClusterIP service?
First, the default load balancer is the Classic Load Balancer, but you probably want the newer Application Load Balancer for your Ingress resources, so add this annotation on your Ingress resources:
annotations:
  kubernetes.io/ingress.class: alb
By default your Services should be of type NodePort, but as you request it is possible to use ClusterIP Services as well, if you also add this annotation on your Ingress resource (to set the traffic mode):
alb.ingress.kubernetes.io/target-type: ip
See the ALB Ingress documentation for more on this.
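For illustration, here is a minimal sketch of how those two annotations could look on an Ingress for the airflow-web ClusterIP Service from your question. The internet-facing scheme and the /* path are assumptions; adjust them to your setup.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: airflow-web
  namespace: dp-airflow
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: ip          # route directly to pod IPs (works with ClusterIP and Fargate)
    alb.ingress.kubernetes.io/scheme: internet-facing  # assumption: a public ALB
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: airflow-web
          servicePort: 8080
With the AWS Load Balancer Controller installed, applying something like this should make an ALB appear and its DNS name show up in the Ingress ADDRESS column.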

404 not found for GKE Ingress

I am trying out the Ingress feature in a GKE cluster. Following are the steps I followed:
1. Created a deployment with the below command:
kubectl create deployment hello --image=gcr.io/google-samples/hello-app:2.0
2. Exposed the deployment as type NodePort:
kubectl expose deployment hello --port=8080 --type=NodePort
3. My Ingress manifest is as follows:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    kubernetes.io/ingress.class: gce
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: hello
          servicePort: 8080
$ kubectl get services
NAME    TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
hello   NodePort   10.0.41.132   <none>        8080:30820/TCP   113m
$ kubectl get ingress
NAME            HOSTS   ADDRESS    PORTS   AGE
basic-ingress   *       35.X.X.X   80      26m
But when I access the external IP using curl, it throws 404 not found.
The below error can be seen in the GKE Console.
I think I am missing something in the Ingress definition. Please guide me to fix this.
The image definition has been taken from this guide:
https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
I have tried to create the same Ingress from scratch (new cluster, no Ingress, no Service), and I was able to create it and perform a curl successfully. These were the steps:
1.- Create a cluster (the details do not matter, just create it as you want)
2.- Connect to the cluster and install kubectl -> sudo apt-get install kubectl
3.- kubectl create deployment hello --image=gcr.io/google-samples/hello-app:2.0
4.- kubectl expose deployment hello --port=8080 --type=NodePort
5.- Create the Ingress as follows (without annotations), as per Creating an Ingress resource:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: hello   # the Service exposed in step 4
    servicePort: 8080
6.- Review your Ingress: kubectl get ingress basic-ingress
#cloudshell:$ kubectl get ingress basic-ingress
NAME            HOSTS   ADDRESS          PORTS   AGE
basic-ingress   *       130.211.xx.xxx   80      5m46s
7.- And now it is working when I perform the curl:
#cloudshell:$ curl http://130.211.xx.xxx
Hello, world!
Version: 2.0.0
Hostname: hello-86dbf5b7c6-f7qgl
You were using Ingress annotations, which is another way to create Ingress services, but a little bit more advanced. My suggestion is to create it as simply as possible first.
Please try it this way and let me know how it goes.
The same YAML definitions were failing for me in a Shared VPC. This got resolved after adding the below firewall rule:
gcloud compute firewall-rules create k8s-fw-l7--60cada75751e6d79 --network <SharedVPC> --description "GCE L7 firewall rule" --allow tcp:30000-32767 --source-ranges 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 --target-tags gke-privatetestgkecluster-cf899a18-node --project <Project>
https://cloud.google.com/load-balancing/docs/health-checks

AWS EKS Ingress - No Address

I have a Kubernetes application using AWS EKS, with the below details:
Cluster:
+ Kubernetes version: 1.15
+ Platform version: eks.1
Node Groups:
+ Instance Type: t3.medium
+ 2(Minimum) - 2(Maximum) - 2(Desired) configuration
[Pods]
+ 2 active pods
[Service]
+ Configured Type: ClusterIP
+ metadata.name: k8s-eks-api-service
[rbac-role.yaml]
https://pastebin.com/Ksapy7vK
[alb-ingress-controller.yaml]
https://pastebin.com/95CwMtg0
[ingress.yaml]
https://pastebin.com/S3gbEzez
When I tried to pull the Ingress details, below are the values (no ADDRESS):
Host: *
ADDRESS:
My goal is to know why the address has no value. I expect to have a private or public address to be used by other services in my application.
The solution that fitted my case was adding ingressClassName in ingress.yaml or configuring a default IngressClass.
Add ingressClassName in ingress.yaml:
#ingress.yaml
metadata:
  name: ingress-nginx
  ...
spec:
  ingressClassName: nginx   # <-- add this
  rules:
  ...
or
edit the IngressClass yaml:
$ kubectl edit ingressclass <ingressClass Name> -n <ingressClass namespace>
#ingressClass.yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"   # <-- add this
  ....
link
In order for your Kubernetes cluster to be able to get an address, you will need to be able to manage Route 53 from within the cluster; for this task I would recommend using ExternalDNS.
In a broader sense, ExternalDNS allows you to control DNS records dynamically via Kubernetes resources in a DNS provider-agnostic way.
source: ExternalDNS
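As a hedged sketch of how that fits together: ExternalDNS watches Ingress resources and publishes their host rules as DNS records, so your Ingress needs a host it can pick up. Everything below that is not in your question is an assumption: the Ingress name and hostname are hypothetical, and the servicePort should be whatever your ingress.yaml already defines.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: k8s-eks-api-ingress        # hypothetical name, use your existing Ingress
  annotations:
    kubernetes.io/ingress.class: alb
spec:
  rules:
  - host: api.example.com          # hypothetical record in your Route 53 zone
    http:
      paths:
      - path: /*
        backend:
          serviceName: k8s-eks-api-service
          servicePort: 80          # assumption: adjust to your Service's port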
This happened to me too: after all the setup, I was not able to see the Ingress address. The best way to debug this issue is to check the logs of the Ingress controller. You can do this by:
Get the Ingress controller pod name using: kubectl get po -n kube-system
Check the logs of that pod using: kubectl logs <po_name> -n kube-system
This will point you to the exact issue as to why you are not seeing the address.

Kubernetes dashboard in aws EC2 instance?

I have started 2 Ubuntu 16 EC2 instances (one for the master and the other for a worker). Everything is working OK.
I need to set up the dashboard to view it on my machine. I have copied admin.conf and executed the command below in my machine's terminal:
kubectl --kubeconfig ./admin.conf proxy --address='0.0.0.0' --port=8002 --accept-hosts='.*'
Everything is fine.
But in browser when I use below link
http://localhost:8002/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
I am getting Error: 'dial tcp 192.168.1.23:8443: i/o timeout'
Trying to reach: 'https://192.168.1.23:8443/'
I have enabled all traffic in the AWS security group. What am I missing? Please point me to a solution.
If you only want to reach the dashboard then it is pretty easy: get the IP address of your EC2 instance and the port on which it is serving the dashboard (kubectl get services --all-namespaces), and then reach it using:
First:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
And in your browser:
http://<IP>:<PORT>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
Note that this is a possible security vulnerability, as you are accepting all traffic (AWS firewall rules) and also all connections for your kubectl proxy (--address 0.0.0.0 --accept-hosts '.*'), so please narrow it down or use a different approach. If you have more questions feel free to ask.
Have you tried putting http:// in front of localhost?
I don't have enough rep to comment, else I would.
To bypass the dashboard token login, you have to execute the below code:
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
EOF
After this you can skip login without providing a token. But this will cause security issues.

How to define external ip for kubernetes ingress

I have a question about Kubernetes Ingress.
I want to use Ingress with my Amazon account and/or private cloud and want to assign an external IP.
It is possible to assign an external IP for Services:
Services documentation - chapter external IPs
but I cannot find a way to do that for Ingress: Ingress documentation.
My question is directed especially to the Kubernetes team.
A similar question was asked by Simon in this topic: How to force SSL for Kubernetes Ingress on GKE
but he asked about GKE, while I am interested in private cloud and AWS.
Thank you in advance.
[UPDATE]
Guys, I found that my question may have already been answered in this topic.
Actually, the answer that #anigosa put there is specific to GCloud.
His solution won't work in a private cloud nor in AWS. In my opinion the reason is that he uses type: LoadBalancer (which cannot be used in a private cloud) and the loadBalancerIP property, which only works on GCloud (on AWS it causes the error: "Failed to create load balancer for service default/nginx-ingress-svc: LoadBalancerIP cannot be specified for AWS ELB").
Looking at this issue, it seems you can define an annotation on your Service and map it to an existing Elastic IP.
Something like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: <>
spec:
  type: LoadBalancer
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
Please note this will create an ELB for this Service, not an Ingress.
As an Ingress is simply one service (= ELB) handling requests for many other services, it should be possible to do something similar for Ingress, but I couldn't find any docs for it.
There are two main ways you can do this. One is using a static IP annotation as shown in Omer's answer (which is cloud specific, and normally relies on the external IP being set up beforehand); the other is using an ingress controller (which is generally cloud agnostic).
The ingress controller will obtain an external IP on its service and then pass that to your ingress which will then use that IP as its own.
Traffic will then come into the cluster via the controller's service and the controller will route to your ingress.
Here's an example of the ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: my-ingress-class
spec:
  tls:
  - hosts:
    - ssl.somehost.com
  rules:
  - host: ssl.somehost.com
    http:
      paths:
      - backend:
          serviceName: backend-service
          servicePort: 8080
The line
kubernetes.io/ingress.class: my-ingress-class
tells the cluster we want only an ingress controller that handles this "class" of ingress traffic. You can have multiple ingress controllers in the cluster, each declaring they handle a different class of ingress traffic, so when you install an ingress controller you also need to declare which ingress class you want it to handle (see the sketch below).
Caveat: if you do not declare the ingress class on an Ingress resource, ALL the ingress controllers in the cluster will attempt to route traffic to that Ingress.
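As an illustration, here is a trimmed sketch of how a controller declares its class, assuming ingress-nginx is the controller (the --ingress-class flag is specific to ingress-nginx; other controllers configure this differently, and RBAC, service account and ports are omitted for brevity):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-ingress-class-controller
spec:
  selector:
    matchLabels:
      app: my-ingress-class-controller
  template:
    metadata:
      labels:
        app: my-ingress-class-controller
    spec:
      containers:
      - name: controller
        image: k8s.gcr.io/ingress-nginx/controller:v0.47.0
        args:
        - /nginx-ingress-controller
        - --ingress-class=my-ingress-class   # only Ingresses annotated with this class are handled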
Now if you want an external IP that is private, you can do that via the controller. For AWS and GCP there are annotations that tell the cloud provider you want an internal-only IP, added to the load balancer Service of the ingress controller:
For AWS:
service.beta.kubernetes.io/aws-load-balancer-type: "internal"
For GCP:
networking.gke.io/load-balancer-type: "Internal"
or (< Kubernetes 1.17)
cloud.google.com/load-balancer-type: "Internal"
Your Ingress will inherit the IP obtained by the ingress controller's load balancer.
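For example, a hedged sketch of the ingress controller's Service with the AWS annotation mentioned above, so the address your Ingress inherits stays private (the controller name and selector are placeholders for whatever controller you installed):
apiVersion: v1
kind: Service
metadata:
  name: my-ingress-controller      # placeholder for your controller's service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "internal"
spec:
  type: LoadBalancer
  selector:
    app: my-ingress-controller     # placeholder selector matching the controller pods
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443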