I'm using the NGINX ingress controller in a Kubernetes cluster, and its host address (the load balancer DNS name) returns "Could not resolve host: ..".
In AWS, I don't even have a load balancer with this DNS name.
However, when I run kubectl logs on the ingress controller, it's still receiving traffic normally.
How is it possible for the ingress controller to have a host that can't be resolved and still receive traffic?
Nginx Ingress Controller service
-> kubectl -n ingress-nginx describe svc ingress-nginx-controller
...
LoadBalancer Ingress: xxx.elb.ap-northeast-2.amazonaws.com #deprecated
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 31610/TCP
Endpoints: 10.0.51.53:80
Port: https 443/TCP
TargetPort: http/TCP
NodePort: https 32544/TCP
Endpoints: 10.0.51.53:80
Session Affinity: None
External Traffic Policy: Cluster
Events:
Warning SyncLoadBalancerFailed 32m (x6919 over 24d) service-controller (combined from similar events): Error syncing load balancer: failed to ensure load balancer: error creating load balancer listener: "TargetGroupAssociationLimit: The following target groups cannot be associated with more than one load balancer: arn:aws:elasticloadbalancing:ap-northeast-2:4xxx:targetgroup/k8s-ingressn-ingressn-5f8ebc7e16/25aa0ef278298505\n\tstatus code: 400, request id: d1767330-f7d1-4c4f-bcf1-4f1e4af8ab9f"
Normal EnsuringLoadBalancer 2m40s (x6934 over 24d) service-controller Ensuring load balancer
-> curl xxx.elb.ap-northeast-2.amazonaws.com
curl: (6) Could not resolve host: xxx.elb.ap-northeast-2.amazonaws.com
Ingress using deprecated DNS
-> kubectl describe ingress slack-api-ingress
Name: slack-api-ingress
Namespace: default
Address: xxx.elb.ap-northeast-2.amazonaws.com #deprecated
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
...
I have another NLB in AWS that was initially provisioned by the ingress controller, and I can still access my services through that LB's DNS name.
My assumption is that the ingress controller tried to update and create a new load balancer, failed, and kept directing traffic to the previous LB. If that's the case, I want the ingress controller to report the previous LB's DNS name.
I'm new to Kubernetes and probably missing something, so any advice is appreciated.
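One way to confirm this theory is to check which load balancer the target group from the failure event is still attached to. A sketch with the AWS CLI, reusing the (partially redacted) ARN from the event above:

-> aws elbv2 describe-target-groups --target-group-arns arn:aws:elasticloadbalancing:ap-northeast-2:4xxx:targetgroup/k8s-ingressn-ingressn-5f8ebc7e16/25aa0ef278298505 --query 'TargetGroups[].LoadBalancerArns'

If that target group is still associated with the old NLB, the service controller cannot attach it to a new load balancer, which matches the TargetGroupAssociationLimit error.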
Related
I have installed the NGINX ingress controller inside an EKS cluster, fronted by an NLB of type internal.
The ingress controller created a network load balancer with listeners on ports 80 and 443.
With a plain listener on port 443 we can't attach an SSL certificate to the NLB; only when I use a listener of type TLS am I able to add an SSL certificate from AWS ACM.
Now the issue is that I am trying to expose a frontend application through this NLB-fronted NGINX ingress controller.
When the NLB listener on port 443 is plain TCP, I can access the application, but the browser complains about the SSL certificate (the fake Kubernetes cert). When I change the listener from 443/TCP to TLS in the NLB, it throws a "400 The plain HTTP request was sent to HTTPS port" error.
Like many solutions out there suggest, I tried changing the targetPort from https: https to https: http, but with that I get another error: "The page isn't working, ERR_TOOMANY_REQUESTS".
Could anyone help me resolve this issue?
Any ideas or suggestions would be highly appreciated.
To resolve the SSL certificate issue and the "400 The plain HTTP request was sent to HTTPS port" error, you may need to modify your ingress configuration so that the ingress listens for HTTPS traffic on port 443. This can be done by adding the following annotations and TLS section to your ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
  name: example
  namespace: example
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example
            port:
              name: https
  tls:
  - hosts:
    - example.com
    secretName: example-tls
In the example above, nginx.ingress.kubernetes.io/ssl-redirect tells the ingress to redirect HTTP traffic to HTTPS, and nginx.ingress.kubernetes.io/secure-backends tells the ingress to encrypt the traffic between the ingress and the backend services (on current ingress-nginx versions this annotation has been superseded by nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"). secretName references the Kubernetes secret holding the TLS certificate and key for example.com.
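If the goal is instead to terminate TLS at the NLB with the ACM certificate (which is what switching the listener type to TLS does), a common approach is to annotate the ingress controller's Service so that the NLB decrypts traffic and forwards plain HTTP to the controller. A minimal sketch, assuming the legacy in-tree AWS annotations and a placeholder certificate ARN:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # placeholder - replace with your ACM certificate ARN
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:region:account:certificate/id
    # terminate TLS on the NLB for port 443 only; backends speak plain TCP
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: http  # decrypted traffic must hit the controller's HTTP port, not HTTPS

Pointing port 443 at the controller's plain HTTP target port is what avoids the "plain HTTP request was sent to HTTPS port" error, since TLS is already stripped by the NLB. The redirect loop typically comes from ssl-redirect: the controller then only ever sees plain HTTP, so it keeps redirecting; when terminating TLS at the NLB you generally need to disable the controller's default SSL redirect.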
I am new to AWS.
I am trying to deploy my application to AWS EKS. Everything is created well, except for my Caddy server service: it is stuck in Pending status while trying to get an external IP.
When I describe the service, this is the output:
Name: caddy
Namespace: default
Labels: app=caddy
Annotations: service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
service.beta.kubernetes.io/aws-load-balancer-type: external
Selector: app=caddy
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.100.4.149
IPs: 10.100.4.149
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 31064/TCP
Endpoints: 192.168.26.17:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 30707/TCP
Endpoints: 192.168.26.17:443
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 16m service-controller Ensuring load balancer
Warning FailedBuildModel 15m service Failed build model due to WebIdentityErr: failed to retrieve credentials
caused by: InvalidIdentityToken: Incorrect token audience
status code: 400, request id: dd76289e-ca16-48e5-8985-3a4fc1b64f43
Warning FailedBuildModel 7m49s service Failed build model due to WebIdentityErr: failed to retrieve credentials
caused by: InvalidIdentityToken: Incorrect token audience
status code: 400, request id: 62ed516f-c505-4bc8-979f-74edc449217e
I discovered that the problem was coming from the service account I had created: there was a typo in the OIDC provider URI.
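For anyone hitting the same InvalidIdentityToken error: a quick way to compare the cluster's OIDC issuer against the IAM identity provider (cluster name is a placeholder):

-> aws eks describe-cluster --name my-cluster --query 'cluster.identity.oidc.issuer' --output text
-> aws iam list-open-id-connect-providers

The issuer URL (without the https:// prefix) must match the IAM OIDC provider exactly, and for IAM roles for service accounts the provider's audience should be sts.amazonaws.com.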
I've created a Kubernetes cluster on AWS EC2 instances using kubeadm, but when I try to create a service of type LoadBalancer, I get a pending EXTERNAL-IP status:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 123m
nginx LoadBalancer 10.107.199.170 <pending> 8080:31579/TCP 45m52s
My create command is
kubectl expose deployment nginx --port 8080 --target-port 80 --type=LoadBalancer
I'm not sure what I'm doing wrong.
What I expect to see is an EXTERNAL-IP address given for the load balancer.
Has anyone had this and successfully solved it, please?
Thanks.
You need to set up the interface between Kubernetes and AWS, i.e. the AWS cloud provider integration. With kubeadm, that starts with passing cloud-provider: aws to the kubelet:
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws
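Worker nodes need the same kubelet flag when they join; a minimal sketch, assuming the same kubeadm API version:

apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws

Note that the in-tree AWS provider also expects the EC2 instances, subnets, and security groups to carry a kubernetes.io/cluster/<cluster-name> tag so it can discover them.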
More details can be found:
https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/
https://blog.heptio.com/setting-up-the-kubernetes-aws-cloud-provider-6f0349b512bd
https://blog.scottlowe.org/2019/02/18/kubernetes-kubeadm-and-the-aws-cloud-provider/
https://itnext.io/kubernetes-part-2-a-cluster-set-up-on-aws-with-aws-cloud-provider-and-aws-loadbalancer-f02c3509f2c2
Once you finish this setup, you will not only get an AWS load balancer created for each Kubernetes service of type LoadBalancer, but you will also be able to control many of its settings using annotations. For example:
apiVersion: v1
kind: Service
metadata:
  name: example
  namespace: kube-system
  labels:
    run: example
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:xx-xxxx-x:xxxxxxxxx:xxxxxxx/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx # replace this value
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 5556
    protocol: TCP
  selector:
    app: example
Different settings can be applied to a load balancer service in AWS using annotations.
To create a Kubernetes cluster on AWS from plain EC2 instances, some extra configuration is needed for load balancer integration to work as expected; that's why your service is not being exposed with an external IP.
As a workaround, get the public IP of the EC2 instance on which your cluster deployed the nginx pod, then edit the nginx service to add it as an external IP:
kubectl edit service nginx
This opens the service manifest in your editor; add the external IP:
type: LoadBalancer
externalIPs:
- 1.2.3.4
where 1.2.3.4 is the public IP of the EC2 instance.
Then make sure your security group allows inbound traffic on the node port (31579); see the CLI sketch below.
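A sketch of that rule with the AWS CLI (the security group ID is a placeholder, and in practice you would narrow the source CIDR):

-> aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 31579 --cidr 0.0.0.0/0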
Now you can reach the service from any browser at 1.2.3.4:31579.
I have a workload in GKE cluster and I need to expose one port with both TCP and UDP protocols externally. The complication is that egress and ingress should go through the same external IP in order to make P2P protocol working.
Currently, my cluster is public and I use a trick with hostNetwork: true described here https://stackoverflow.com/a/47887571/803403, but I am considering moving to a private cluster and using Cloud NAT. However, I have not found a way to expose that port in this case. I tried to expose it via a ClusterIP service, but in the firewall rules I could not map the external port to that ClusterIP port, since the latter has no network tags. I'm also not sure whether firewall rules can be applied to the Cloud Router that is bound to Cloud NAT.
Any ideas?
You are at a dead end! Today you expose your service through the public IP of one of your nodes. If you go private, you will no longer have a public IP, only private IPs; thus, you need something that bridges the private world and the public internet: a load balancer.
However, serving both protocols (here TCP and UDP) on the same IP isn't natively supported by Google load balancers, so you can't use a load balancer here.
No luck...
Note: I know there are updates in progress on the Google Cloud internal network side, but that's all I know. I don't know exactly what will change, or whether a new type of load balancer will be released. Maybe... stay tuned, but it won't be in the next few weeks.
You can:
create a gcloud compute address
create a LoadBalancer service that listens on your TCP port(s)
create a second LoadBalancer service that listens on your UDP port(s)
assign the gcloud compute IP address to both LoadBalancer services using spec.loadBalancerIP
Make sure the IP and the GKE services are in the same gcloud project and region.
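Reserving the shared regional IP first might look like this (name and region are placeholders):

-> gcloud compute addresses create shared-ip --region us-central1
-> gcloud compute addresses describe shared-ip --region us-central1 --format='value(address)'

The address printed by the second command is what goes into spec.loadBalancerIP of both services below.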
apiVersion: v1
kind: Service
metadata:
  name: service-tcp
  labels:
    app: nginx
spec:
  ports:
  - protocol: TCP
    port: 80
  selector:
    app: nginx
  type: LoadBalancer
  loadBalancerIP: 1.2.3.4
---
apiVersion: v1
kind: Service
metadata:
  name: service-udp
  labels:
    app: nginx
spec:
  ports:
  - protocol: UDP
    port: 80
  selector:
    app: nginx
  type: LoadBalancer
  loadBalancerIP: 1.2.3.4
I have a cluster created using eksctl and valid certificates created under ACM. I used the DNS method to verify domain ownership, and it completed successfully.
Below are the details I see when executing kubectl get ing:
NAME HOSTS ADDRESS PORTS AGE
eks-learning-ingress my-host.com b29c58-production-91306872.us-east-1.elb.amazonaws.com 80 18h
When I access the application using https://b29c58-production-91306872.us-east-1.elb.amazonaws.com, it loads with a security warning, because that is not the domain name the certificates were created for. When I try https://my-host.com, I get a timeout.
I have 2 questions:
1) I am using a CNAME to point my domain to the AWS ELB. The values I added for the CNAME are: name: my-host.com, points to: b29c58-production-91306872.us-east-1.elb.amazonaws.com. Is this correct?
2) Below is my ingress resource definition; maybe I am missing something, as requests are not reaching the application:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: eks-learning-ingress
  namespace: production
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/certificate-arn: arn:dseast-1:255982529496:sda7-a148-2d878ef678df
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}, {"HTTP": 8080}, {"HTTPS": 8443}]'
  labels:
    app: eks-learning-ingress
spec:
  rules:
  - host: my-host.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: eks-learning-service
          servicePort: 80
Any help would be really great. Thanks.
My go-to solution is an alias A record in Route 53. Instead of adding an IP, you select the "alias" option and pick your load balancer; Amazon will handle the rest.
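For reference, the same record can be created with the AWS CLI; a sketch where both hosted zone IDs are placeholders (the AliasTarget one must be the ELB's own hosted zone ID, not yours):

-> aws route53 change-resource-record-sets --hosted-zone-id YOUR_ZONE_ID --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"my-host.com","Type":"A","AliasTarget":{"HostedZoneId":"ELB_ZONE_ID","DNSName":"b29c58-production-91306872.us-east-1.elb.amazonaws.com","EvaluateTargetHealth":false}}}]}'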
I think you have a port mismatch. https:// uses port 443, not port 80, but your ingress appears to be accepting requests on port 80 rather than 443.
If 443 were configured, I'd expect to see it listed under PORTS as 80, 443.
Can you verify with telnet, nc, or curl -v that your ingress is actually accepting requests on port 443? If it is, check the response body reported by curl; it should give you some indication of why the request is not propagating down to your service.
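For example, sending the expected Host header straight at the ELB name from the question (-k skips certificate validation, since the certificate won't match the ELB hostname):

-> curl -vk -H 'Host: my-host.com' https://b29c58-production-91306872.us-east-1.elb.amazonaws.com/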
We use nginx-ingress, so unfortunately I can't look at a local ALB ingress config and compare it to yours.