I am trying to launch an Application Load Balancer (ALB) on AWS EKS. I have already installed the AWS Load Balancer Controller in my cluster successfully. The tutorial I am following says that after creating and applying an Ingress, I should see an ALB created in AWS, which I don't. What could be the reason? Am I missing something?
I have already created and started apple-service and banana-service and their pods.
Here's the Ingress YAML. I can apply this Ingress successfully, but no ALB is launched.
I am using EKS Kubernetes version 1.22.
kubectl -n kube-system get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
aws-load-balancer-controller 2/2 2 2 19m
coredns 2/2 2 2 38m
kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
my-awesome-app-ingress <none> testingkarlo.ml 80 14m
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-awesome-app-ingress
labels:
app: my-awesome-app
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
spec:
rules:
- host: testingkarlo.ml
http:
paths:
- path: /apple
pathType: Prefix
backend:
service:
name: apple-service
port:
number: 5678
- path: /banana
pathType: Prefix
backend:
service:
name: banana-service
port:
number: 5678
apple.yaml
kind: Pod
apiVersion: v1
metadata:
name: apple-app
labels:
app: apple
spec:
containers:
- name: apple-app
image: hashicorp/http-echo
args:
- "-text=apple"
---
kind: Service
apiVersion: v1
metadata:
name: apple-service
spec:
selector:
app: apple
ports:
- port: 5678 # Default port for image
targetPort: 5678
type: LoadBalancer
banana.yaml is similar to the above.
After applying apple.yaml and banana.yaml, two Classic Load Balancers are launched in AWS.
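For reference, a minimal way to start diagnosing why no ALB appears (assuming the controller deployment name shown above) is to check the controller's logs and the events on the Ingress; reconciliation errors such as missing subnet tags, missing IAM permissions or an unrecognized ingress class usually show up in one of these two places:
kubectl -n kube-system logs deployment/aws-load-balancer-controller
kubectl describe ingress my-awesome-app-ingress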
Related
I have an AWS K3S Kubernetes cluster.
I have an AWS Load Balancer.
I have a registered domain.
I have a registered AWS certificate.
I created a CNAME record pointing my domain to the AWS Load Balancer DNS name.
I installed the Traefik Ingress Controller on the AWS K3S Kubernetes cluster.
I deployed the "usermgmt" and "whoami" services to the AWS K3S Kubernetes cluster.
I created a Traefik Ingress with paths to "usermgmt" and "whoami".
The question is:
How do I connect my AWS Load Balancer, which is hosted on my domain, to my services on K3S, using the Traefik Ingress Controller?
Or in other words:
How do I adapt the "traefik service" or "traefik deployment" described below to use an AWS certificate resolver for my registered domain?
Or any example of how to use
an AWS Load Balancer, AWS Target Group and AWS Security Group created with Terraform files
in combination with the Traefik Ingress Controller and Traefik Ingress Routes deployed to a K3S Kubernetes cluster, resolved with an AWS certificate.
I currently can't connect to my services through the AWS Load Balancer.
The following errors are returned:
404 Page Not Found
502 Bad Gateway
Here are examples of the URLs I try:
https://keycloak.skycomposer.net/usermgmt
https://keycloak.skycomposer.net/whoami
I set up corresponding Ingress Routes for the "usermgmt" and "whoami" Kubernetes services.
Here is some more information:
I created a K3S Kubernetes cluster in AWS with a Load Balancer.
These are my terraform files:
https://github.com/skyglass/user-management/tree/master/terraform
The K3S cluster is deployed to an EC2 instance (see the "userdata.tpl" script).
I disabled the Traefik Ingress Controller deployment so that I could deploy it later.
I found an example of how to install Traefik on a K3S Kubernetes cluster here:
https://github.com/sleighzy/k3s-traefik-v2-kubernetes-crd
Unfortunately, this example uses a "godaddy" certificate resolver, but my domain is registered with AWS Route 53 and I use AWS Certificate Manager.
Here are the files for the "traefik service" and "traefik deployment" that I am trying to adapt:
traefik-service:
---
apiVersion: v1
kind: Service
metadata:
name: traefik
namespace: kube-system
spec:
# The targetPort entries are required as the Traefik container is listening on ports > 1024
# so that the container can be run as a non-root user and they can bind to these ports.
# Traefik is still accessed over 80 and 443 on the host, but the service routes the traffic
# to ports 8080 and 8443 on the container.
ports:
- protocol: TCP
name: web
port: 80
targetPort: 8080
- protocol: TCP
name: websecure
port: 443
targetPort: 8443
- protocol: TCP
name: admin
port: 8080
targetPort: 9080
selector:
app: traefik
# Set externalTrafficPolicy to Local so that all external traffic intended for
# the Traefik pod goes directly to that local node. If the default of Cluster is
# used instead then the client source IP address is lost, and may hop between nodes.
externalTrafficPolicy: Local
type: LoadBalancer
traefik-deployment:
---
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: traefik-ingress-controller
---
kind: Deployment
apiVersion: apps/v1
metadata:
namespace: kube-system
name: traefik
labels:
app: traefik
spec:
replicas: 1
selector:
matchLabels:
app: traefik
template:
metadata:
labels:
app: traefik
spec:
serviceAccountName: traefik-ingress-controller
containers:
- name: traefik
image: traefik:v2.4
args:
- --api.dashboard=true
- --ping=true
- --accesslog
- --entrypoints.traefik.address=:9080
- --entrypoints.web.address=:8080
- --entrypoints.websecure.address=:8443
# Uncomment the below lines to redirect http requests to https.
# This specifies the port :443 and not the https entrypoint name for the
# redirect as the service is listening on port 443 and directing traffic
# to the 8443 target port. If the entrypoint name "websecure" was used,
# instead of "to=:443", then the browser would be redirected to port 8443.
- --entrypoints.web.http.redirections.entrypoint.to=:443
- --entrypoints.web.http.redirections.entrypoint.scheme=https
- --providers.kubernetescrd
- --providers.kubernetesingress
- --certificatesresolvers.myresolver.acme.tlschallenge=true
- --certificatesresolvers.myresolver.acme.email=postmaster@example.com
- --certificatesresolvers.myresolver.acme.storage=/etc/traefik/certs/acme.json
# Please note that this is the staging Let's Encrypt server.
# Once you get things working, you should remove that whole line altogether.
# - --certificatesresolvers.godaddy.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
- --log
- --log.level=INFO
livenessProbe:
failureThreshold: 3
httpGet:
path: /ping
port: 9080
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 3
resources:
limits:
memory: '100Mi'
cpu: '1000m'
ports:
# The Traefik container is listening on ports > 1024 so the container
# can be run as a non-root user and they can bind to these ports.
- name: web
containerPort: 8080
- name: websecure
containerPort: 8443
- name: admin
containerPort: 9080
volumeMounts:
- name: certificates
mountPath: /etc/traefik/certs
# volumes:
# - name: certificates
# persistentVolumeClaim:
# claimName: traefik-certs-pvc
volumes:
- name: certificates
hostPath:
path: "/Users/dddd/git/aws/letsencrypt:/etc/traefik/certs"
See other files here: https://github.com/sleighzy/k3s-traefik-v2-kubernetes-crd
Ideally there would be a solution like this:
apiVersion: v1
kind: Service
metadata:
name: traefik-proxy
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:REGION:ACCOUNTID:certificate/CERT-ID"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
type: LoadBalancer
selector:
app: traefik-proxy
tier: proxy
ports:
- port: 443
targetPort: 80
In this solution, I would just provide my AWS certificate ARN and the Traefik Ingress Controller would do everything else.
A similar solution is described in this article:
https://www.ronaldjamesgroup.com/blog/getting-started-with-traefik
But unfortunately this solution doesn't work for me either; I tried it without any success.
The following errors are returned:
404 Page Not Found
502 Bad Gateway
when I try the Ingress Route paths for my domain:
https://keycloak.skycomposer.net/usermgmt
https://keycloak.skycomposer.net/whoami
After trying several options, I finally found the solution:
https://github.com/skyglass-examples/aws-k3s-traefik
I created the AWS Load Balancer and K3S cluster with Terraform
I created the Traefik Ingress Controller Kubernetes manifest files
I created Kubernetes manifest files for the 2 services
I registered the AWS Load Balancer DNS name for my domain
I created an AWS certificate for my domain
I used the AWS certificate ARN for the Traefik Ingress Controller and the AWS HTTPS Load Balancer
Here are my Traefik Ingress Controller manifest files:
traefik-deployment.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: traefik-proxy
namespace: kube-system
labels:
app: traefik-proxy
tier: proxy
spec:
replicas: 1
selector:
matchLabels:
app: traefik-proxy
tier: proxy
template:
metadata:
labels:
app: traefik-proxy
tier: proxy
spec:
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik:v1.2.0-rc1-alpine
name: traefik-proxy
ports:
- containerPort: 80
hostPort: 80
name: traefik-proxy
- containerPort: 8080
name: traefik-ui
args:
- --web
- --kubernetes
traefik-service.yaml:
apiVersion: v1
kind: Service
metadata:
name: traefik-proxy
namespace: kube-system
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-1:dddddddddd"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
service.beta.kubernetes.io/aws-load-balancer-type: "alb"
spec:
type: LoadBalancer
externalTrafficPolicy: Local
selector:
app: traefik-proxy
tier: proxy
ports:
- port: 443
targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: traefik-web-ui
namespace: kube-system
spec:
selector:
app: traefik-proxy
tier: proxy
ports:
- port: 80
targetPort: 8080
traefik-ingress.yaml:
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
name: traefik-lb
spec:
controller: traefik.io/ingress-controller
---
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
name: "traefik-usermgmt-ingress"
spec:
ingressClassName: "traefik-lb"
rules:
- host: "keycloak.skycomposer.net"
http:
paths:
- path: "/usermgmt"
backend:
serviceName: "usermgmt"
servicePort: 80
---
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
name: "traefik-whoami-ingress"
spec:
ingressClassName: "traefik-lb"
rules:
- host: "keycloak.skycomposer.net"
http:
paths:
- path: "/whoami"
backend:
serviceName: "whoami"
servicePort: 80
See the full code here:
https://github.com/skyglass-examples/aws-k3s-traefik
The code includes:
terraform files for AWS Load Balancer and K3S Kubernetes Cluster
source code for one of the docker containers, which I deployed to K3S
kubernetes manifest files for the Traefik Ingress Controller, 2 Kubernetes Services and the Traefik Ingress, which exposes these services with a secured HTTPS connection on the registered domain.
Replace the AWS certificate ARN with the corresponding ARN of your certificate.
Replace "skycomposer.net" with your domain name (see more details in the Readme file: https://github.com/skyglass-examples/aws-k3s-traefik)
LetsEncrypt not verifying via Kubernetes ingress and loadbalancer in AWS EKS
ClusterIssuer
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
namespace: cert-manager
spec:
acme:
# The ACME server URL
server: https://acme-staging-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: my@email.com
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: letsencrypt-staging
# Enable the HTTP-01 challenge provider
solvers:
- http01:
ingress:
class: nginx
Ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: echo-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
cert-manager.io/cluster-issuer: "letsencrypt-staging"
spec:
tls:
- hosts:
- echo0.site.com
secretName: echo-tls
rules:
- host: echo0.site.com
http:
paths:
- backend:
serviceName: echo0
servicePort: 80
Events
12m Normal IssuerNotReady certificaterequest/echo-tls-3171246787 Referenced issuer does not have a Ready status condition
12m Normal GeneratedKey certificate/echo-tls Generated a new private key
12m Normal Requested certificate/echo-tls Created new CertificateRequest resource "echo-tls-3171246787"
4m29s Warning ErrVerifyACMEAccount clusterissuer/letsencrypt-staging Failed to verify ACME account: context deadline exceeded
4m29s Warning ErrInitIssuer clusterissuer/letsencrypt-staging Error initializing issuer: context deadline exceeded
kubectl describe certificate
Name: echo-tls
Namespace: default
Labels: <none>
Annotations: <none>
API Version: cert-manager.io/v1alpha3
Kind: Certificate
Metadata:
Creation Timestamp: 2020-04-04T23:57:22Z
Generation: 1
Owner References:
API Version: extensions/v1beta1
Block Owner Deletion: true
Controller: true
Kind: Ingress
Name: echo-ingress
UID: 1018290f-d7bc-4f7c-9590-b8924b61c111
Resource Version: 425968
Self Link: /apis/cert-manager.io/v1alpha3/namespaces/default/certificates/echo-tls
UID: 0775f965-22dc-4053-a6c2-a87b46b3967c
Spec:
Dns Names:
echo0.site.com
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: letsencrypt-staging
Secret Name: echo-tls
Status:
Conditions:
Last Transition Time: 2020-04-04T23:57:22Z
Message: Waiting for CertificateRequest "echo-tls-3171246787" to complete
Reason: InProgress
Status: False
Type: Ready
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal GeneratedKey 18m cert-manager Generated a new private key
Normal Requested 18m cert-manager Created new CertificateRequest resource "echo-tls-3171246787"
I've been going at this for a few days now. I have tried different domains but end up with the same results. Am I missing anything or any steps here? It is based on this tutorial here.
Any help would be appreciated.
Usually with Golang applications the error context deadline exceeded means the connection timed out. That sounds like the cert-manager pod was not able to reach the ACME API, which can happen if your cluster has an outbound firewall and/or does not have a NAT or Internet Gateway attached to its subnets.
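A quick way to test that theory (the pod name and curl image here are just illustrative choices) is to try reaching the ACME staging endpoint from inside the cluster:
kubectl run acme-test --rm -it --restart=Never --image=curlimages/curl -- curl -sI https://acme-staging-v02.api.letsencrypt.org/directory
If this also hangs and times out, the problem is outbound connectivity (NAT/Internet Gateway, security groups or network ACLs) rather than cert-manager itself.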
This might be worthwhile to look at; I was facing a similar issue.
Change the LoadBalancer in the ingress-nginx Service: add/change externalTrafficPolicy: Cluster.
The reason is that the pod with the certificate issuer wound up on a different node than the load balancer did, so it couldn't talk to itself through the ingress.
Below is the complete block, taken from https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.26.1/deploy/static/provider/cloud-generic.yaml:
kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
#CHANGE/ADD THIS
externalTrafficPolicy: Cluster
type: LoadBalancer
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: https
---
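As an additional sanity check (the token path below is just a placeholder, not a real challenge token), the HTTP-01 solver can only succeed if requests like this reach the cluster from the internet over plain HTTP:
curl -i http://echo0.site.com/.well-known/acme-challenge/placeholder-token
A 404 from the ingress controller is fine here; what matters is that the request reaches the cluster at all instead of timing out.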
Currently, I'm trying to create a Kubernetes cluster on Google Cloud with two load balancers: one for the backend (Spring Boot) and another for the frontend (Angular), where each service (load balancer) communicates with 2 replicas (pods). To achieve that, I created the following ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: sample-ingress
spec:
rules:
- http:
paths:
- path: /rest/v1/*
backend:
serviceName: sample-backend
servicePort: 8082
- path: /*
backend:
serviceName: sample-frontend
servicePort: 80
The ingress mentioned above lets the frontend app communicate with the REST API made available by the backend app. However, I have to create sticky sessions, so that every user communicates with the same pod, because of the authentication mechanism provided by the backend. To clarify, if a user authenticates on pod #1, the cookie will not be recognized by pod #2.
To overcome this issue, I read that nginx-ingress manages to deal with this situation, and I installed it using Helm through the steps available here: https://kubernetes.github.io/ingress-nginx/deploy/
With the following services (I will just paste one of the services, the other one is similar):
apiVersion: v1
kind: Service
metadata:
name: sample-backend
spec:
selector:
app: sample
tier: backend
ports:
- protocol: TCP
port: 8082
targetPort: 8082
type: LoadBalancer
And I declared the following ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: sample-nginx-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/affinity-mode: persistent
nginx.ingress.kubernetes.io/session-cookie-hash: sha1
nginx.ingress.kubernetes.io/session-cookie-name: sample-cookie
spec:
rules:
- http:
paths:
- path: /rest/v1/*
backend:
serviceName: sample-backend
servicePort: 8082
- path: /*
backend:
serviceName: sample-frontend
servicePort: 80
After that, I ran kubectl apply -f sample-nginx-ingress.yaml to apply the ingress; it is created and its status is OK. However, when I access the URL that appears in the "Endpoints" column, the browser can't connect.
Am I doing anything wrong?
Edit 1
** Updated service and ingress configurations **
After some help, I've managed to access the services through the NGINX Ingress. Below are the configurations:
Nginx Ingress
The paths shouldn't contain the "*", unlike the default Kubernetes ingress, where the "*" is mandatory to route the paths I want.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: sample-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "sample-cookie"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
rules:
- http:
paths:
- path: /rest/v1/
backend:
serviceName: sample-backend
servicePort: 8082
- path: /
backend:
serviceName: sample-frontend
servicePort: 80
Services
Also, the services shouldn't be of type "LoadBalancer" but "ClusterIP" as below:
apiVersion: v1
kind: Service
metadata:
name: sample-backend
spec:
selector:
app: sample
tier: backend
ports:
- protocol: TCP
port: 8082
targetPort: 8082
type: ClusterIP
However, I still can't achieve sticky sessions in my Kubernetes cluster: I'm still getting 403 and even the cookie name is not replaced, so I guess the annotations are not working as expected.
I looked into this matter and found a solution to your issue.
To achieve sticky sessions for both paths you will need two Ingress definitions.
I created an example configuration to show you the whole process:
Steps to reproduce:
Apply Ingress definitions
Create deployments
Create services
Create Ingresses
Test
I assume that the cluster is provisioned and is working correctly.
Apply Ingress definitions
Follow this Ingress link to find out whether there are any prerequisites needed before installing the Ingress controller on your infrastructure.
Apply the command below to provide all the mandatory prerequisites:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
Run the command below to apply the generic configuration that creates a service:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
Create deployments
Below are 2 example deployments to respond to the Ingress traffic on specific services:
hello.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello
spec:
selector:
matchLabels:
app: hello
version: 1.0.0
replicas: 5
template:
metadata:
labels:
app: hello
version: 1.0.0
spec:
containers:
- name: hello
image: "gcr.io/google-samples/hello-app:1.0"
env:
- name: "PORT"
value: "50001"
Apply this first deployment configuration by invoking command:
$ kubectl apply -f hello.yaml
goodbye.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: goodbye
spec:
selector:
matchLabels:
app: goodbye
version: 2.0.0
replicas: 5
template:
metadata:
labels:
app: goodbye
version: 2.0.0
spec:
containers:
- name: goodbye
image: "gcr.io/google-samples/hello-app:2.0"
env:
- name: "PORT"
value: "50001"
Apply this second deployment configuration by invoking command:
$ kubectl apply -f goodbye.yaml
Check if deployments configured pods correctly:
$ kubectl get deployments
It should show something like that:
NAME READY UP-TO-DATE AVAILABLE AGE
goodbye 5/5 5 5 2m19s
hello 5/5 5 5 4m57s
Create services
To connect to earlier created pods you will need to create services. Each service will be assigned to one deployment. Below are 2 services to accomplish that:
hello-service.yaml:
apiVersion: v1
kind: Service
metadata:
name: hello-service
spec:
type: NodePort
selector:
app: hello
version: 1.0.0
ports:
- name: hello-port
protocol: TCP
port: 50001
targetPort: 50001
Apply first service configuration by invoking command:
$ kubectl apply -f hello-service.yaml
goodbye-service.yaml:
apiVersion: v1
kind: Service
metadata:
name: goodbye-service
spec:
type: NodePort
selector:
app: goodbye
version: 2.0.0
ports:
- name: goodbye-port
protocol: TCP
port: 50001
targetPort: 50001
Apply second service configuration by invoking command:
$ kubectl apply -f goodbye-service.yaml
Keep in mind that both configurations use type: NodePort.
Check if services were created successfully:
$ kubectl get services
Output should look like that:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
goodbye-service NodePort 10.0.5.131 <none> 50001:32210/TCP 3s
hello-service NodePort 10.0.8.13 <none> 50001:32118/TCP 8s
Create Ingresses
To achieve sticky sessions you will need to create 2 ingress definitions.
Definitions are provided below:
hello-ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: hello-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "hello-cookie"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/affinity-mode: persistent
nginx.ingress.kubernetes.io/session-cookie-hash: sha1
spec:
rules:
- host: DOMAIN.NAME
http:
paths:
- path: /
backend:
serviceName: hello-service
servicePort: hello-port
goodbye-ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: goodbye-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "goodbye-cookie"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/affinity-mode: persistent
nginx.ingress.kubernetes.io/session-cookie-hash: sha1
spec:
rules:
- host: DOMAIN.NAME
http:
paths:
- path: /v2/
backend:
serviceName: goodbye-service
servicePort: goodbye-port
Please change DOMAIN.NAME in both Ingresses to the appropriate value for your case.
I would advise looking at this Ingress sticky session link.
Both Ingresses are configured for HTTP-only traffic.
Apply both of them invoking command:
$ kubectl apply -f hello-ingress.yaml
$ kubectl apply -f goodbye-ingress.yaml
Check if both configurations were applied:
$ kubectl get ingress
Output should be something like this:
NAME HOSTS ADDRESS PORTS AGE
goodbye-ingress DOMAIN.NAME IP_ADDRESS 80 26m
hello-ingress DOMAIN.NAME IP_ADDRESS 80 26m
Test
Open your browser and go to http://DOMAIN.NAME
Output should be like this:
Hello, world!
Version: 1.0.0
Hostname: hello-549db57dfd-4h8fb
Hostname: hello-549db57dfd-4h8fb is the name of the pod. Refresh it a couple of times.
It should stay the same.
To check if another route is working go to http://DOMAIN.NAME/v2/
Output should be like this:
Hello, world!
Version: 2.0.0
Hostname: goodbye-7b5798f754-pbkbg
Hostname: goodbye-7b5798f754-pbkbg is the name of the pod. Refresh it a couple of times.
It should stay the same.
To ensure that the cookies are not changing, open the developer tools (probably F12) and navigate to the cookies section. You can reload the page to check that they stay the same.
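As an alternative to the browser, a quick curl check (assuming the hello-ingress above) should show the affinity cookie being set on the response:
curl -s -D - -o /dev/null http://DOMAIN.NAME/ | grep -i set-cookie
You should see a Set-Cookie: hello-cookie=... header; sending that cookie back on subsequent requests keeps you pinned to the same pod.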
I think your Service configuration is wrong. Just remove type: LoadBalancer and the type will be ClusterIP by default.
LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created. See more here: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer.
apiVersion: v1
kind: Service
metadata:
name: sample-backend
spec:
selector:
app: sample
tier: backend
ports:
- protocol: TCP
port: 8082
targetPort: 8082
I have the following application, which I'm able to run successfully in K8s using a Service of type LoadBalancer. It is a very simple app with two routes:
/ - you should see 'hello application'
/api/books - should provide a list of books in JSON format
This is the service
apiVersion: v1
kind: Service
metadata:
name: go-ms
labels:
app: go-ms
tier: service
spec:
type: LoadBalancer
ports:
- port: 8080
selector:
app: go-ms
This is the deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: go-ms
labels:
app: go-ms
spec:
replicas: 2
template:
metadata:
labels:
app: go-ms
tier: service
spec:
containers:
- name: go-ms
image: rayndockder/http:0.0.2
ports:
- containerPort: 8080
env:
- name: PORT
value: "8080"
resources:
requests:
memory: "64Mi"
cpu: "125m"
limits:
memory: "128Mi"
cpu: "250m"
After applying both YAMLs and calling the URL:
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I was able to see the data in the browser as expected, and also the root app using just the external IP.
Now I want to use Istio, so I followed the guide and installed it successfully via Helm using https://istio.io/docs/setup/kubernetes/install/helm/, and verified that all 53 CRDs are there and that the istio-system components (such as istio-ingressgateway, istio-pilot, etc.; all 8 deployments) are up and running.
I've changed the service above from LoadBalancer to NodePort and created the following Istio config according to the Istio docs:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: http-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 8080
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: virtualservice
spec:
hosts:
- "*"
gateways:
- http-gateway
http:
- match:
- uri:
prefix: "/"
- uri:
exact: "/api/books"
route:
- destination:
port:
number: 8080
host: go-ms
In addition, I've labeled the namespace where the application is deployed:
kubectl label namespace books istio-injection=enabled
Now, to get the external IP, I've used the command:
kubectl get svc -n istio-system -l istio=ingressgateway
and got this as the external IP:
b0751-1302075110.eu-central-1.elb.amazonaws.com
When trying to access the URL:
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I got error:
This site can’t be reached
ERR_CONNECTION_TIMED_OUT
If I run the Docker image rayndockder/http:0.0.2 via
docker run -it -p 8080:8080 httpv2
all paths work correctly!
Any idea/hint what the issue could be?
Is there a way to trace the Istio configs to see whether something is missing, or whether there is maybe a collision with a port or a network policy?
BTW, the deployment and service can run on any cluster for testing, if someone could help...
If I change everything to port 80 (in all the YAML files, the application and the Docker image), I am able to get the data for the root path, but not for "api/books".
I tried your config with the modification of the gateway port to 80 (from 8080) in my local minikube setup of Kubernetes and Istio. This is the command I used:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
name: go-ms
labels:
app: go-ms
tier: service
spec:
ports:
- port: 8080
selector:
app: go-ms
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: go-ms
labels:
app: go-ms
spec:
replicas: 2
template:
metadata:
labels:
app: go-ms
tier: service
spec:
containers:
- name: go-ms
image: rayndockder/http:0.0.2
ports:
- containerPort: 8080
env:
- name: PORT
value: "8080"
resources:
requests:
memory: "64Mi"
cpu: "125m"
limits:
memory: "128Mi"
cpu: "250m"
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: http-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: go-ms-virtualservice
spec:
hosts:
- "*"
gateways:
- http-gateway
http:
- match:
- uri:
prefix: /
- uri:
exact: /api/books
route:
- destination:
port:
number: 8080
host: go-ms
EOF
The reason I changed the gateway port to 80 is that the Istio ingress gateway by default opens up a few ports such as 80, 443 and a few others. In my case, as minikube doesn't have an external load balancer, I used NodePorts, which is 31380 in my case.
I was able to access the app with the URL http://$(minikube ip):31380.
There is no point in changing the ports of the services and deployments, since these are application specific.
Maybe this question specifies the ports opened by the Istio ingress gateway.
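For reference, a way to see which NodePort the ingress gateway exposes for plain HTTP (port names can differ between Istio versions, so treat the jsonpath as a sketch):
kubectl -n istio-system get svc istio-ingressgateway
kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'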
Please help me deal with the accessibility of my simple application.
I created a YAML with the application:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: myapp-test
spec:
replicas: 2
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: gcr.io/kubernetes-e2e-test-images/echoserver:2.1
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: myapp-service
spec:
selector:
app: myapp
ports:
- name: http
protocol: TCP
port: 80
targetPort: 8080
type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx-ingress
spec:
rules:
- host: myapp.com
http:
paths:
- path: /
backend:
serviceName: myapp-service
servicePort: 80
- path: /hello
backend:
serviceName: myapp-service
servicePort: 80
Then I created a k8s cluster via kops like this; all the k8s services came up and I can get onto the master:
kops create cluster \
--node-count=2 \
--node-size=t2.micro \
--master-size=t2.micro \
--master-count=1 \
--zones=us-east-1a \
--name=${KOPS_CLUSTER_NAME}
In the end, I can't get to the application on port 80; it says that the connection is refused!
Can someone tell me what the problem is? The YAML above fully works, but only in the minikube environment.
Indeed you have created an Ingress resource, but I presume you have not previously deployed the NGINX Ingress Controller for your cluster on AWS. How to do this in general is explained here.
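A quick way to confirm whether any ingress controller is running at all (the namespace and labels depend on how it was installed, so this is only a rough check):
kubectl get pods --all-namespaces | grep -i ingress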
In the case of a Kubernetes cluster bootstrapped with kops, things are more complex, and it requires you to modify the existing cluster to use a dedicated kops add-on, kube-ingress-aws-controller, as explained on their GitHub project page here.
In its current form, your app can be reached only via the Node/AWS instance external IP on a port assigned from the default NodePort range (30000-32767). You can check the currently assigned port via kubectl get svc myapp-service, but this requires opening that port on the firewall first (the default inbound rules deny all traffic apart from SSH). Based on your deploy/service manifest files:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myapp-service NodePort 100.64.187.80 <none> 80:32076/TCP 37m
With port 32076 open in the inbound rules of the Security Group assigned to my instance, I can now reach the app on the NodePort:
curl <node_external_ip>:32076
Hostname: myapp-test-f87bcbd44-8nxpn
Pod Information:
-no pod information available-
Server values: