Please help me make my simple Kubernetes application reachable from the internet via Traefik on AWS.
I opened ports 30000-32767 in the master node's security group and the app is accessible from the world; only Traefik's port 80 refuses to work. When I open port 80 in the master's security group and try to access my app in a browser, I get CONNECTION REFUSED; when I remove the rule, I get CONNECTION TIMEOUT instead. What is the problem? All Kubernetes services are up and there are no errors in Traefik.
KOPS:
kops create cluster \
  --node-count=2 \
  --networking=calico \
  --node-size=t2.micro \
  --master-size=t2.micro \
  --master-count=1 \
  --zones=us-east-1a \
  --name=${KOPS_CLUSTER_NAME}
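(For reference, this assumes the usual kops environment variables are exported beforehand; the bucket and cluster names below are placeholders:)

export KOPS_CLUSTER_NAME=myapp.example.com          # placeholder cluster name
export KOPS_STATE_STORE=s3://my-kops-state-store    # placeholder S3 state bucket
# after `kops update cluster --yes`, check that everything came up:
kops validate cluster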
K8S app.yml and traefik.yml:
app
https://pastebin.com/WtEe633x
traefik
https://pastebin.com/pnPJVPBP
When I type myapp.com, I want to get the output of the echoserver app on port 80.
You've set things up using a NodePort service:
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  # namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
  type: NodePort
This doesn't mean that the service proxy will listen on port 80 from the point of view of the outside world. By default, NodePort services allocate their node port at random (from the 30000-32767 range). What you probably want instead is a LoadBalancer service. Check out https://github.com/Ridecell/kubernetes/blob/9e034f4d0fb38e49f808ae0852af74366f630d48/manifests/traefik.yml#L152-L171 for an example.
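As a rough sketch of that change (selector and port names taken from your manifest above; on AWS the cloud provider then provisions an ELB for the service):

kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
  type: LoadBalancer   # was NodePort; on AWS this provisions an ELB

You can then point your domain's DNS at the load balancer's DNS name.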
OMG, the problem was the following: I had an invalid domain name, so I registered a new free valid domain on freenom.com, set Amazon's NS records in the domain settings, and created a hosted zone for the new domain in Route 53 with an alias A record pointing to the load balancer's DNS name, and it works! I also changed type: NodePort to type: LoadBalancer in the Traefik service config.
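For anyone scripting that last step, a minimal sketch of the alias record with the AWS CLI (the zone IDs, domain, and ELB DNS name are all placeholders; note the AliasTarget HostedZoneId must be the ELB's own hosted zone ID, not yours):

aws route53 change-resource-record-sets \
  --hosted-zone-id Z_MY_ZONE_ID \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "myapp.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z_ELB_ZONE_ID",
          "DNSName": "my-elb-1234567890.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'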
Related
I have deployed the AWS Load Balancer Controller on AWS EKS and created a k8s Ingress resource.
I am deploying a Java web application with a k8s Deployment. I want to make sure sticky sessions hold so that logins work.
I have read that if I set the annotation below, sticky sessions will work:
alb.ingress.kubernetes.io/target-type: ip
But I am seeing the ingress route requests to a different replica each time, so logins fail because session cookies do not persist.
What am I missing here?
alb.ingress.kubernetes.io/target-type: ip is required, but the annotation that actually enables stickiness is:
alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true
You can also set the cookie duration:
alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=300
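Put together on the Ingress itself, a minimal sketch (the Ingress name, host, and backend service name are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=300
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80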
If you want to manage sticky sessions at the K8s level instead, you can use sessionAffinity: ClientIP:
kind: Service
apiVersion: v1
metadata:
  name: service
spec:
  selector:
    app: app
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10000
I've created a Kubernetes cluster on AWS EC2 instances using kubeadm, but when I try to create a service of type LoadBalancer, I get an EXTERNAL-IP pending status:
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP          123m
nginx        LoadBalancer   10.107.199.170   <pending>     8080:31579/TCP   45m52s
My create command is
kubectl expose deployment nginx --port 8080 --target-port 80 --type=LoadBalancer
I'm not sure what I'm doing wrong.
What I expect to see is an EXTERNAL-IP address given for the load balancer.
Has anyone had this and successfully solved it, please?
Thanks.
You need to set up the interface between k8s and AWS, which is the AWS cloud provider integration. With kubeadm, pass the provider to the kubelet:
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws
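Worker nodes need the same kubelet flag when they join the cluster; a minimal sketch using the same kubeadm config version (note also, per the links below, that your EC2 instances, subnets, and security groups must carry the kubernetes.io/cluster/<cluster-name> tag so the cloud provider can discover them):

apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws   # same flag as on the control plane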
More details can be found:
https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/
https://blog.heptio.com/setting-up-the-kubernetes-aws-cloud-provider-6f0349b512bd
https://blog.scottlowe.org/2019/02/18/kubernetes-kubeadm-and-the-aws-cloud-provider/
https://itnext.io/kubernetes-part-2-a-cluster-set-up-on-aws-with-aws-cloud-provider-and-aws-loadbalancer-f02c3509f2c2
Once you finish this setup, you not only get an AWS load balancer created for every k8s service of type LoadBalancer; you will also be able to control many of its settings using annotations:
apiVersion: v1
kind: Service
metadata:
  name: example
  namespace: kube-system
  labels:
    run: example
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:xx-xxxx-x:xxxxxxxxx:xxxxxxx/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx # replace this value
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 5556
      protocol: TCP
  selector:
    app: example
Different settings can be applied to a load balancer service in AWS using annotations.
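For example, a sketch of a few other well-known annotations (the values here are illustrative only):

metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb                                   # provision an NLB instead of a classic ELB
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"                            # internal-only load balancer
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"   # spread traffic across zones
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "300"              # idle timeout in seconds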
To create a K8s cluster on AWS using plain EC2 instances, you need some extra configuration to make it work as expected; that's why your service is not exposed with an external IP. You need to get the public IP of the EC2 instance your cluster deployed the Nginx pod on, and then edit the Nginx service to add it as an external IP:
kubectl edit service nginx
which opens the service definition in an editor so you can add the external IP:
  type: LoadBalancer
  externalIPs:
    - 1.2.3.4
where 1.2.3.4 is the public IP of the EC2 instance.
Then make sure your security group allows inbound traffic on your node port (31579).
Now you can use the k8s service from any browser by opening 1.2.3.4:31579.
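For the security group step, a sketch with the AWS CLI (the group ID is a placeholder, and 0.0.0.0/0 opens the port to the whole internet, so narrow the CIDR if you can):

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 31579 \
  --cidr 0.0.0.0/0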
I have a StatefulSet of 3 replicas of ejabberd. I have exposed them to a GCP NEG using the following declaration:
apiVersion: v1
kind: Service
metadata:
  name: ejabberd
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"5222":{"name": "ejabberd-xmpp-production-neg"}, "5443":{"name": "ejabberd-http-production-neg"}}}'
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: ejabberd
  ports:
    - protocol: TCP
      name: xmpp
      port: 5222
      targetPort: 5222
    - protocol: TCP
      name: http
      port: 5443
      targetPort: 5443
The health check is an HTTP endpoint on port 5443 that returns an HTTP 200 (OK) status code at path /.
The issue is that when I create an SSL proxy, the health check always fails, but when I SSH into the pods and run curl localhost:5443/ I get a success response. TCP health checks didn't work either.
The issue was caused by the Cloud Load Balancer not being able to reach the ports. I fixed it by creating a firewall rule that allows the load balancer (whose health checks come from the source ranges below) to access the specified ports. This is achieved by running the following command:
gcloud compute firewall-rules create fw-allow-health-check-and-proxy \
--network=NETWORK_NAME \
--action=allow \
--direction=ingress \
--target-tags=GKE_NODE_NETWORK_TAGS \
--source-ranges=130.211.0.0/22,35.191.0.0/16 \
--rules=tcp:5443,tcp:5222
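If you are unsure which network tags your nodes carry (the GKE_NODE_NETWORK_TAGS placeholder above), a sketch for looking them up on one of the node VMs (the instance name and zone are placeholders):

gcloud compute instances describe gke-my-cluster-default-pool-node-1 \
  --zone=us-central1-a \
  --format="get(tags.items)"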
More can be found in the GCP guide "Attaching an external HTTP(S) load balancer to standalone NEGs".
I deployed the Kong ingress controller on an AWS EKS cluster with the Fargate option.
I am unable to access our application over the internet on the HTTP port; I keep getting ERR_CONNECTION_TIMED_OUT in the browser.
I followed the Kong deployment steps given at:
https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/deployment/eks.md
The kong-proxy service is created without issue, yet its EXTERNAL-IP is still showing pending.
We are able to access our application on the internal network (by logging on to a running pod) via the kong-proxy CLUSTER-IP without any problem using curl.
An NLB was also created automatically in the AWS console when we created the kong-proxy service; we are using its DNS name to try to connect from the internet.
Kindly help me understand what the problem could be.
My kong-proxy YAML is:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  name: kong-proxy
  namespace: kong
spec:
  externalTrafficPolicy: Local
  ports:
    - name: proxy
      port: 80
      protocol: TCP
      targetPort: 80
    - name: proxy-ssl
      port: 443
      protocol: TCP
      targetPort: 443
  selector:
    app: ingress-kong
  type: LoadBalancer
I don't think it's supported yet, as per https://github.com/aws/containers-roadmap/issues/617.
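A commonly suggested workaround (a sketch, assuming the AWS Load Balancer Controller is installed in the cluster): Fargate pods can only be reached as IP targets, so let the controller provision the NLB in IP mode instead of relying on the legacy in-tree integration:

apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
  namespace: kong
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external          # hand the service to the AWS Load Balancer Controller
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip     # IP targets, required for Fargate pods
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: ingress-kong
  ports:
    - name: proxy
      port: 80
      protocol: TCP
      targetPort: 80
    - name: proxy-ssl
      port: 443
      protocol: TCP
      targetPort: 443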