I can't reach the external IP of the service I deployed to GKE (the deployment itself went through successfully via Jenkins). Below is what I see.
When I run kubectl get service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello LoadBalancer 10.92.14.31 34.170.30.56 8080:31110/TCP 2d21h
I checked my deployment.yaml and I think there is no problem with it; below is the file:
spec:
  containers:
  - name: hello
    image: azmassage/hello:latest
    imagePullPolicy: Always
    ports:
    - containerPort: 8080
      name: hello
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  ports:
  - protocol: TCP
    port: 8080
    nodePort: 31110
  selector:
    app: hello
    tier: hello
  type: LoadBalancer
I think this is a firewall problem, but even after I created a firewall rule I can't connect to it. Below is my test:
admin_#cloudshell:~$ curl http://34.170.30.56:8080
curl: (7) Failed to connect to 34.170.30.56 port 8080: Connection refused
I'm not sure, but maybe it makes sense to open port 31110 on the firewall and check it with curl?
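Something like the following is what I have in mind (just a sketch: the rule name is my own placeholder, and I have not yet checked whether my nodes need specific target tags or a non-default network):
kubectl get nodes -o wide   # to find a node's EXTERNAL-IP
gcloud compute firewall-rules create allow-nodeport-31110 \
    --network default \
    --allow tcp:31110
curl http://<NODE_EXTERNAL_IP>:31110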
In the documentation of Nginx Ingress for AWS it says:
By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer.
Link: https://kubernetes.github.io/ingress-nginx/deploy/#tls-termination-in-aws-load-balancer-nlb
So I followed the instructions: set the AWS ACM certificate, set the VPC CIDR, and deployed.
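The values I replaced in that deploy manifest look roughly like this (the ARN and CIDR below are placeholders, and I am quoting the keys from memory, so treat it as a sketch rather than the exact file):
# Service annotations on ingress-nginx-controller
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:XXXXXXXXXXXX:certificate/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
# ConfigMap entry with the VPC CIDR
proxy-real-ip-cidr: 10.0.0.0/16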
Then check ingress nginx service:
kubectl get service --namespace ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.100.124.56 adba41948av49484z55137c374e1e17d-09af54e014df8676.elb.us-east-1.amazonaws.com 80:31985/TCP,443:32330/TCP 17h
ingress-nginx-controller-admission ClusterIP 10.100.175.52 <none> 443/TCP 17h
In the AWS console, the Load Balancer has the necessary certificate and everything seems fine.
Next, I created the Ingress rules and a Service with type: ClusterIP.
Service:
apiVersion: v1
kind: Service
metadata:
  name: test-app-service
spec:
  selector:
    name: test-app-pod
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: ClusterIP
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-app-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: foobar.com # forwards to LB
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: test-app-service
            port:
              number: 80
Check the Ingress:
kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
test-app-ingress nginx foobar.com adba41948av49484z55137c374e1e17d-09af54e014df8676.elb.us-east-1.amazonaws.com 80 29m
So I am just stuck here.
When I go to http://foobar.com -- it works perfectly fine.
But when I go to https://foobar.com - it says 'Could not resolve host: foobar.com'
I would expect that when I go to https://foobar.com, TLS is terminated on the LB and the traffic is forwarded to the service.
I also found an article that describes the same setup, and in the comments people ask the same questions I have, so I am not the only one :D : https://habeeb-umo.medium.com/using-nginx-ingress-in-eks-with-tls-termination-in-a-network-load-balancer-1783fc92935 (I followed those instructions as well, with no luck).
UPDATE:
As per #mdaniel:
When I do curl -v http://foobar.com and curl -v https://foobar.com, both say Could not resolve host: foobar.com:
http:
* Could not resolve host: foobar.com
* Closing connection 0
curl: (6) Could not resolve host: foobar.com
https:
* Could not resolve host: foobar.com
* Closing connection 0
curl: (6) Could not resolve host: foobar.com
And in the browser when I go to http://foobar.com - it opens the page OK, BUT when I refresh the page it shows 'This site can’t be reached'.
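To take DNS out of the picture, the NLB can also be tested directly against its hostname (a sketch: --connect-to keeps the Host header and TLS SNI as foobar.com while actually connecting to the ELB name from the service output above):
curl -v -H "Host: foobar.com" http://adba41948av49484z55137c374e1e17d-09af54e014df8676.elb.us-east-1.amazonaws.com/
curl -v --connect-to foobar.com:443:adba41948av49484z55137c374e1e17d-09af54e014df8676.elb.us-east-1.amazonaws.com:443 https://foobar.com/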
UPDATE2:
I think I have found the issue.
I used an httpd container inside the pod and exposed port 8080:
spec:
  containers:
    - name: some-app
      image: httpd:2.4.54
      ports:
        - containerPort: 8080
So when I do a port-forward:
kubectl port-forward test-app-deployment-f59984d85-qckr9 8081:8080
The first GET request to http://127.0.0.1:8081 is fine, but the next one fails:
Forwarding from 127.0.0.1:8081 -> 8080
Forwarding from [::1]:8081 -> 8080
Handling connection for 8081
E1124 11:48:43.466682 94768 portforward.go:406] an error occurred forwarding 8081 -> 8080: error forwarding port 8080 to pod d79172ed802e00f93a834aab7b89a0da053dba00ad327d71fff85f582da9819e, uid : exit status 1: 2022/11/24 10:48:43 socat[15820] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
So I changed containerPort to 80 (httpd listens on port 80 by default, so the forward to 8080 hit a port nothing was listening on) and it helped:
spec:
  containers:
    - name: some-app
      image: httpd:2.4.54
      ports:
        - containerPort: 80 # changed port to 80
Run port forwarding: kubectl port-forward test-app-deployment-f59984d85-qckr9 8081:80
Make three GET requests to http://127.0.0.1:8081:
Forwarding from 127.0.0.1:8081 -> 80
Forwarding from [::1]:8081 -> 80
Handling connection for 8081
Handling connection for 8081
Handling connection for 8081
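With the container now listening on 80, I assume the Service from above also needs its targetPort changed from 8080 to 80 to match; a sketch of the adjusted Service:
apiVersion: v1
kind: Service
metadata:
  name: test-app-service
spec:
  selector:
    name: test-app-pod
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80 # was 8080; httpd listens on 80
  type: ClusterIP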
I have a running private Kubernetes cluster (v1.20) with Fargate instances for all pods in the cluster. Access is restricted to the NodePort range and 443.
I use ExternalDNS to create Route 53 entries to route from my internal network to the Kubernetes cluster. Everything works fine with, for example, UIs listening on port 443.
Now I want to reach the kubernetes-dashboard not via a proxy to my localhost but via DNS resolution with Route 53. For this, I made two changes in kubernetes-dashboard.yml:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
is now:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  annotations:
    external-dns.alpha.kubernetes.io/hostname: kubernetes-dashboard.hostname.local
spec:
  ports:
    - port: 443
      targetPort: 8443
  externalTrafficPolicy: Local
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
and the container specs now contain the self-signed certificates:
spec:
  containers:
    - name: kubernetes-dashboard
      image: kubernetesui/dashboard:v2.0.0
      imagePullPolicy: Always
      ports:
        - containerPort: 8443
          protocol: TCP
      args:
        - --auto-generate-certificates
        - --namespace=kubernetes-dashboard
        - --tls-cert-file=/tls.crt
        - --tls-key-file=/tls.key
The certificates (the same ones used for the UIs) are mounted via a Kubernetes Secret.
The subnets for the Fargate instances are the same as for my other applications. Yet I receive a "Connection refused" when I try to call my dashboard.
Checking the dashboard through the local proxy, the configuration seems fine. The DNS entry also resolves to the correct IP address of the Fargate instance.
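For reference, the Service and its endpoints can be inspected like this (namespace and names taken from the manifest above):
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard -o wide
kubectl -n kubernetes-dashboard get endpoints kubernetes-dashboard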
My problem is that I already use this setup elsewhere and it works, and I see no difference in the dashboard configuration. Am I missing something?
Can anybody help me out here?
Greetings,
Eric
I have created a cluster on AWS EC2 using kops, consisting of a master node and two worker nodes, all with public IPv4 addresses assigned.
Now, I want to create a deployment with a service using NodePort to expose the application to the public.
After having created the service, I retrieve the following information, showing that it correctly identified my three pods:
nlykkei:~/projects/k8s-examples$ kubectl describe svc hello-svc
Name: hello-svc
Namespace: default
Labels: app=hello
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"hello"},"name":"hello-svc","namespace":"default"},"spec"...
Selector: app=hello-world
Type: NodePort
IP: 100.69.62.27
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30001/TCP
Endpoints: 100.96.1.5:8080,100.96.2.3:8080,100.96.2.4:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
However, when I try to visit any of the public IPv4 addresses on port 30001, I get no response from the server. I have already created a Security Group allowing all ingress traffic to port 30001 for all of the instances.
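The test looks roughly like this (the node IP is a placeholder for any of the three public IPv4 addresses):
kubectl get nodes -o wide   # lists the nodes' external IPs
curl -v http://<NODE_PUBLIC_IP>:30001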
Everything works with Docker Desktop for Mac, and here I notice the following service field not present in the output above:
LoadBalancer Ingress: localhost
I've already studied https://kubernetes.io/docs/concepts/services-networking/service/, and think that NodePort should serve my needs?
Any help is appreciated!
So you want a service that can be accessed from the public. To achieve this, I would recommend creating a ClusterIP service and then an Ingress for that service. So, assuming your deployment hello-world has pods listening on port 8080, you will then have the following two objects:
Service:
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  ports:
  - name: service
    port: 8081        # or whatever you want
    protocol: TCP
    targetPort: 8080  # the port opened in your pods
  selector:
    app: hello-world
  type: ClusterIP
Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  labels:
    app: hello-world
  name: hello-world
spec:
  rules:
  - host: hello-world.mycutedomainname.com
    http:
      paths:
      - backend:
          serviceName: hello-world
          servicePort: 8081  # or whatever you have set as the service port
        path: /
Note: the name field in the service's port is optional.
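Once the ingress controller has picked this up, a quick check could look like the following (the address placeholder is whatever kubectl get ingress reports for hello-world; the Host header matches the rule above):
curl -H "Host: hello-world.mycutedomainname.com" http://<INGRESS_ADDRESS>/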
I am using AWS EKS.
I have launched my Django app with Gunicorn in a Kubernetes cluster.
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api
  labels:
    app: api
    type: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
        type: web
    spec:
      containers:
      - name: vogofleet
        image: xx.xx.com/api:image2
        imagePullPolicy: Always
        env:
        - name: DATABASE_HOST
          value: "test-db-2.xx.xx.xx.xx.com"
        - name: DATABASE_PASSWORD
          value: "xxxyyyxxx"
        - name: DATABASE_USER
          value: "admin"
        - name: DATABASE_PORT
          value: "5432"
        - name: DATABASE_NAME
          value: "test"
        ports:
        - containerPort: 9000
I have applied these changes and I can see my pod running in kubectl get pods.
Now I am trying to expose it via a Service object. Here is my Service object:
# service
---
apiVersion: v1
kind: Service
metadata:
  name: api
  labels:
    app: api
spec:
  ports:
  - port: 9000
    protocol: TCP
    targetPort: 9000
  selector:
    app: api
    type: web
  type: LoadBalancer
The service is also up and running. It has given me an external IP to access the service, which is the address of the load balancer; I can see that it has launched a new load balancer in the AWS console. But I am not able to access it from a browser: it says the address didn't return any data, and the ELB shows the health check on the instances as OutOfService.
There are other pods running in the cluster. When I run printenv in those pods, here is the result:
root#consumer-9444cf7cd-4dr5z:/consumer# printenv | grep API
API_PORT_9000_TCP_ADDR=172.20.140.213
API_SERVICE_HOST=172.20.140.213
API_PORT_9000_TCP_PORT=9000
API_PORT=tcp://172.20.140.213:9000
API_PORT_9000_TCP=tcp://172.20.140.213:9000
API_PORT_9000_TCP_PROTO=tcp
API_SERVICE_PORT=9000
And I tried to check the connection to my api service:
root#consumer-9444cf7cd-4dr5z:/consumer# telnet $API_PORT_9000_TCP_ADDR $API_PORT_9000_TCP_PORT
Trying 172.20.140.213...
telnet: Unable to connect to remote host: Connection refused
But when I port-forward to my localhost, I can access it on localhost:
$ kubectl port-forward api-6d94dcb65d-br6px 9000
and check the connection,
$ nc -vz localhost 9000
found 0 associations
found 1 connections:
1: flags=82<CONNECTED,PREFERRED>
outif lo0
src ::1 port 53299
dst ::1 port 9000
rank info not available
TCP aux info available
Connection to localhost port 9000 [tcp/cslistener] succeeded!
Why am I not able to access it from other containers or from the public internet? The security groups are correct.
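Two things worth checking next (a sketch; ss may not be installed in the image, in which case netstat -lntp would do): whether the Service selector actually produced an endpoint for the pod, and which address gunicorn binds to inside the container, since a process bound to 127.0.0.1 is reachable through kubectl port-forward but not through the pod or service IP:
kubectl get endpoints api
kubectl exec -it api-6d94dcb65d-br6px -- ss -lntp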
I have the same problem. Here's the output of the kubectl describe service command:
kubectl describe services nginx-elb
Name: nginx-elb
Namespace: default
Labels: deploy=slido
Annotations: service.beta.kubernetes.io/aws-load-balancer-internal: true
Selector: deploy=slido
Type: LoadBalancer
IP: 10.100.29.66
LoadBalancer Ingress: internal-a2d259057e6f94965bfc1f08cf86d4ce-884461987.us-west-2.elb.amazonaws.com
Port: http 80/TCP
TargetPort: 3000/TCP
NodePort: http 32582/TCP
Endpoints: 192.168.60.119:3000
Session Affinity: None
External Traffic Policy: Cluster
Events:
  Type    Reason                Age   From                Message
  ----    ------                ----  ----                -------
  Normal  EnsuringLoadBalancer  119s  service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   117s  service-controller  Ensured load balancer
I have a LoadBalancer service on a Kubernetes deployment on AWS (made via kops).
Service definition is as follows:
apiVersion: v1
kind: Service
metadata:
  name: ui
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <certificate_id>
spec:
  ports:
  - name: http
    port: 80
    targetPort: ui-port
    protocol: TCP
  - name: https
    port: 443
    targetPort: ui-port
    protocol: TCP
  selector:
    els-pod: ui
  type: LoadBalancer
Here is the respective deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ui-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        els-pod: ui
    spec:
      containers:
      - image: <my_ecr_registry>/<my_image>:latest
        name: ui
        ports:
        - name: ui-port
          containerPort: 80
      restartPolicy: Always
I know that <my_image> exposes port 80.
I have also assigned an alias to the ELB that gets deployed, say my-k8s.mydomain.org.
Here is the issue:
https://my-k8s.mydomain.org works just fine
http://my-k8s.mydomain.org returns an empty page (when accessing behind a squid proxy, I get the zero-sized reply error message)
Why am I unable to access the service via port 80?
Edit: I have just found that the certificate annotation on the service also assigns the certificate to port 80 on the ELB.
Could that be the issue?
Is there a way around this?
Just needed to add the following annotation in the service definition:
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
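For context, the metadata of the service then looks roughly like this (the certificate ID is still a placeholder); with ssl-ports set to "443", the ELB terminates SSL only on 443 and leaves port 80 as plain HTTP:
metadata:
  name: ui
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <certificate_id>
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"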