I've tried to get access via the external IP from the load balancer, but I can't connect to it with my browser.
I get a connection refused on port 80. I don't know if my YAML file is incorrect or if it's a configuration issue on my load balancer.
I built my Docker image successfully with my requirements.txt and pushed it to the registry bucket so GKE can pull the Docker image into Kubernetes.
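(For reference, a typical build-and-push flow to the Container Registry looks roughly like the commands below; the project ID and image name are placeholders.)

docker build -t gcr.io/<project-id>/<image-name> .
gcloud auth configure-docker
docker push gcr.io/<project-id>/<image-name>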
I deployed my image with the command:
kubectl create -f <filename>.yaml
with the following YAML file:
# [START kubernetes_deployment]
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: example
  labels:
    app: example
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: faxi
        image: gcr.io/<my_id_stands_here>/<bucketname>
        command: ["python3", "manage.py", "runserver"]
        env:
          # [START cloudsql_secrets]
          - name: DATABASE_USER
            valueFrom:
              secretKeyRef:
                name: cloudsql
                key: username
          - name: DATABASE_PASSWORD
            valueFrom:
              secretKeyRef:
                name: cloudsql
                key: password
          # [END cloudsql_secrets]
        ports:
        - containerPort: 8080
      # [START proxy_container]
      - image: b.gcr.io/cloudsql-docker/gce-proxy:1.05
        name: cloudsql-proxy
        command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                  "-instances=<I put here my instance name inside>",
                  "-credential_file=<my_credential_file>"]
        volumeMounts:
          - name: cloudsql-oauth-credentials
            mountPath: /secrets/cloudsql
            readOnly: true
          - name: ssl-certs
            mountPath: /etc/ssl/certs
          - name: cloudsql
            mountPath: /cloudsql
      # [END proxy_container]
      # [START volumes]
      volumes:
        - name: cloudsql-oauth-credentials
          secret:
            secretName: cloudsql-oauth-credentials
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
        - name: cloudsql
          emptyDir:
      # [END volumes]
# [END kubernetes_deployment]
---
# [START service]
apiVersion: v1
kind: Service
metadata:
  name: example
  labels:
    app: example
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: example
# [END service]
That works fine.
I get the output below from kubectl get pods:
NAME        READY   STATUS    RESTARTS   AGE
mypodname   2/2     Running   0          21h
and with kubectl get services
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
example      LoadBalancer   10.xx.xxx.xxx   34.xx.xxx.xxx   80:30833/TCP   21h
kubernetes   ClusterIP      10.xx.xxx.x     <none>          443/TCP        23h
kubectl describe services example
gives me the following output:
Name: example
Namespace: default
Labels: app=example
Annotations: <none>
Selector: app=example
Type: LoadBalancer
IP: 10.xx.xxx.xxx
LoadBalancer Ingress: 34.xx.xxx.xxx
Port: <unset> 80/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30833/TCP
Endpoints: 10.xx.x.xx:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
But I can't connect to my REST API via curl or a browser.
I try to connect to <external_ip>:80 and get a connection refused on this port.
I scanned the external IP with Nmap and it shows that port 80 is closed, but I'm not sure if that's the reason.
My nmap output:
PORT      STATE    SERVICE
80/tcp    closed   http
554/tcp   open     rtsp
7070/tcp  open     realserver
Thank you for your help guys
Please create an ingress firewall rule that allows traffic to port 30833 on all the nodes; that should resolve the issue.
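A sketch of such a rule with gcloud (the rule name and network are placeholders, and you may want to restrict the source range):

gcloud compute firewall-rules create allow-nodeport-30833 \
    --network=<your-cluster-network> \
    --allow=tcp:30833 \
    --source-ranges=0.0.0.0/0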
Related
I am trying to connect to an Nginx server with Minikube, through a NodePort-type service, running on Amazon Linux 2, to get the Nginx welcome page. But when I try, I get this error:
[root@ip-172-31-8-201 ~]# minikube ip
! Executing "docker container inspect minikube --format={{.State.Status}}" took an unusually long time: 2.137525008s
* Restarting the docker service may improve performance.
192.168.49.2
[root@ip-172-31-8-201 ~]# curl 192.168.49.2:30123
curl: (7) Failed to connect to 192.168.49.2 port 30123: Connection refused
I have a file svcNodeport.yaml which consists of this:
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30123
  selector:
    app: nginx
I have a file pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 8080
      protocol: TCP
svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: nginx
The last file is pod4svc.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 8080
Any suggestion is helpful.
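(For reference, the following commands show whether a Service has matching endpoints and which URL Minikube exposes; the object names are taken from the manifests above.)

kubectl get endpoints nginx-nodeport
kubectl describe service nginx-nodeport
minikube service nginx-nodeport --url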
I have a GKE cluster, a static IP, and a container/pod which exposes 2 ports: 8081 (UI HTTPS) and 8082 (WSS HTTPS). I must connect to the "UI HTTPS" and "WSS HTTPS" on the same IP. The "WSS HTTPS" service does not have a health check endpoint.
Do I need to use Istio, Consul, Nginx ingress, or some service mesh to allow these connections on the same IP with different ports?
Is this even possible?
Things i have tried:
GCP global LB with 2 independent ingress services. The YAML for the second service never works correctly, but I can add another backend service via the UI. The ingress always reverts to the default health check for the "WSS HTTPS" service and it is always unhealthy.
Changed Service type from NodePort to LoadBalancer with a static IP. This will not allow me to change the readiness check and always reverts back.
GCP GLB with 1 ingress and 2 backends gives me the same health check failure as above.
TCP Proxy - Does not allow me to set the same instance group.
Below are my ingress, service, and deployment.
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  namespace: myappnamespace
  annotations:
    kubernetes.io/ingress.global-static-ip-name: global-static-ip-name
  labels:
    app: appname
spec:
  backend:
    serviceName: ui-service
    servicePort: 8081
  tls:
  - hosts:
    - my-host-name.com
    secretName: my-secret
  rules:
  - host: my-host-name.com
    http:
      paths:
      - backend:
          serviceName: ui-service
          servicePort: 8081
  - host: my-host-name.com
    http:
      paths:
      - backend:
          serviceName: app-service
          servicePort: 8082
Services
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: ui-service
  name: ui-service
  namespace: myappnamespace
  annotations:
    cloud.google.com/app-protocols: '{"ui-https":"HTTPS"}'
    beta.cloud.google.com/backend-config: '{"ports":{"8081":"cloud-armor"}}'
spec:
  selector:
    app: appname
  ports:
  - name: ui-https
    port: 8081
    targetPort: "ui"
    protocol: "TCP"
  selector:
    name: appname
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: app-service
  name: app-service
  namespace: myappnamespace
  annotations:
    cloud.google.com/app-protocols: '{"serviceport-https":"HTTPS"}'
    beta.cloud.google.com/backend-config: '{"ports":{"8082":"cloud-armor"}}'
spec:
  selector:
    app: appname
  ports:
  - name: serviceport-https
    port: 8082
    targetPort: "service-port"
    protocol: "TCP"
  selector:
    name: appname
  type: NodePort
---
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: appname
  namespace: myappnamespace
  labels:
    name: appname
spec:
  replicas: 1
  selector:
    matchLabels:
      name: appname
  strategy:
    type: Recreate
  template:
    metadata:
      name: appname
      namespace: appnamespace
      labels:
        name: appname
    spec:
      restartPolicy: Always
      serviceAccountName: myserviceaccount
      containers:
      - name: my-container
        image: image
        ports:
        - name: service-port
          containerPort: 8082
        - name: ui
          containerPort: 8081
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /api/health
            port: 8081
            scheme: HTTPS
        livenessProbe:
          exec:
            command:
            - cat
            - /version.txt
[......]
A Service exposed through an Ingress must respond to health checks from the load balancer.
The External HTTP(S) Load Balancer that GKE Ingress creates only supports port 443 for HTTPS traffic.
In that case you may want to:
Use two separate Ingress resources to route traffic for two different host names on the same IP address and port (a rough sketch follows this list):
ui-https.my-host-name.com
wss-https.my-host-name.com
Opt to use Ambassador or Istio Virtual Service.
Try Multi-Port Services.
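A minimal sketch of the two-Ingress approach, assuming illustrative resource names and subdomains (the Services referenced are the ones from the question):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ui-ingress
  namespace: myappnamespace
spec:
  tls:
  - hosts:
    - ui-https.my-host-name.com
    secretName: my-secret
  rules:
  - host: ui-https.my-host-name.com
    http:
      paths:
      - backend:
          serviceName: ui-service
          servicePort: 8081
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wss-ingress
  namespace: myappnamespace
spec:
  tls:
  - hosts:
    - wss-https.my-host-name.com
    secretName: my-secret
  rules:
  - host: wss-https.my-host-name.com
    http:
      paths:
      - backend:
          serviceName: app-service
          servicePort: 8082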
Please let me know if that helped.
I'm struggling with a Kubernetes service without a selector. The cluster is installed on AWS with kops. I have a deployment with 3 nginx pods exposing port 80:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngix-dpl              # Name of the deployment object
  labels:
    app: nginx
spec:
  replicas: 3                 # Number of instances in the deployment
  selector:                   # Selector identifies pods to be
    matchLabels:              # part of the deployment
      app: nginx              # by matching of the label "app"
  template:                   # Templates describes pods of the deployment
    metadata:
      labels:                 # Defines key-value map
        app: nginx            # Label to be recognized by other objects
    spec:                     # as deployment or service
      containers:             # Lists all containers in the pod
      - name: nginx-pod       # container name
        image: nginx:1.17.4   # container docker image
        ports:
        - containerPort: 80   # port exposed by container
After creation of the deployment, I noted the IP addresses:
$ kubectl get pods -o wide | awk {'print $1" " $3" " $6'} | column -t
NAME                        STATUS    IP
curl                        Running   100.96.6.40
ngix-dpl-7d6b8c8944-8zsgk   Running   100.96.8.53
ngix-dpl-7d6b8c8944-l4gwk   Running   100.96.6.43
ngix-dpl-7d6b8c8944-pffsg   Running   100.96.8.54
and created a service that should serve the IP addresses:
apiVersion: v1
kind: Service
metadata:
  name: dummy-svc
  labels:
    app: nginx
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: dummy-svc
subsets:
  - addresses:
      - ip: 100.96.8.53
      - ip: 100.96.6.43
      - ip: 100.96.8.54
    ports:
      - port: 80
        name: http
The service is successfully created:
$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
dummy-svc    ClusterIP   100.64.222.220   <none>        80/TCP    32m
kubernetes   ClusterIP   100.64.0.1       <none>        443/TCP   5d14h
Unfortunately, my attempt to connect to the nginx through the service from another pod of the same namespace fails:
$ curl 100.64.222.220
curl: (7) Failed to connect to 100.64.222.220 port 80: Connection refused
I can successfully connect to the nginx pods directly:
$ curl 100.96.8.53
<!DOCTYPE html>
<html>
<head>
....
I noticed that my service does not have any endpoints, but I'm not sure whether manually created endpoints should be shown there:
$ kubectl get svc/dummy-svc -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"dummy-svc","namespace":"default"},"spec":{"ports":[{"port":80,"protocol":"TCP","targetPort":80}]}}
  creationTimestamp: "2019-11-22T08:41:29Z"
  labels:
    app: nginx
  name: dummy-svc
  namespace: default
  resourceVersion: "4406151"
  selfLink: /api/v1/namespaces/default/services/dummy-svc
  uid: e0aa9d01-0d03-11ea-a19c-0a7942f17bf8
spec:
  clusterIP: 100.64.222.220
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
I understand that this is not a proper use case for services and that using a pod selector would make it work. But I want to understand why this configuration does not work. I don't know where to look for the solution; any hint will be appreciated.
It works if you remove the "name" field from the Endpoints configuration. It should look like this:
apiVersion: v1
kind: Endpoints
metadata:
  name: dummy-svc
subsets:
  - addresses:
      - ip: 172.17.0.4
      - ip: 172.17.0.5
      - ip: 172.17.0.6
    ports:
      - port: 80
As @iliefa mentioned in his comment above, the part of the definition below is treated as a label in cases like this.
ports:
  - port: 80
    name: http
In your scenario, we need to either remove 'name: http' as mentioned by @iliefa, or add 'name: http' under 'ports:' in the service definition, as you can see below.
apiVersion: v1
kind: Service
metadata:
  name: dummy-svc
  labels:
    app: nginx
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    name: http
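A quick way to confirm the match (assuming the names above) is to check that the Service now reports the manual endpoints:

kubectl get endpoints dummy-svc
kubectl describe service dummy-svc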
I have two projects in GCP: project1 and project2.
I have set up a MySQL instance in project1.
I have also set up cloudsqlproxy (a pod) and mypod in a GKE cluster in project2.
I want to access the MySQL instance from mypod through cloudsqlproxy.
I have the following manifest for cloudsqlproxy:
apiVersion: v1
kind: Service
metadata:
  name: cloudsqlproxy-service-mainnet
  namespace: dev
spec:
  ports:
  - port: 3306
    targetPort: port-mainnet
  selector:
    app: cloudsqlproxy
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudsqlproxy
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloudsqlproxy
  template:
    metadata:
      labels:
        app: cloudsqlproxy
    spec:
      containers:
      # Make sure to specify image tag in production
      # Check out the newest version in release page
      # https://github.com/GoogleCloudPlatform/cloudsql-proxy/releases
      - name: cloudsqlproxy
        image: b.gcr.io/cloudsql-docker/gce-proxy:latest
        # 'Always' if imageTag is 'latest', else set to 'IfNotPresent'
        imagePullPolicy: Always
        name: cloudsqlproxy
        command:
        - /cloud_sql_proxy
        - -dir=/cloudsql
        - -instances=project1:asia-east1:development=tcp:0.0.0.0:3306
        - -credential_file=/credentials/credentials.json
        ports:
        - name: port-mainnet
          containerPort: 3306
        volumeMounts:
        - mountPath: /cloudsql
          name: cloudsql
        - mountPath: /credentials
          name: cloud-sql-client-account-token
      volumes:
      - name: cloudsql
        emptyDir:
      - name: cloud-sql-client-account-token
        secret:
          secretName: cloud-sql-client-account-token
I have set up cloud-sql-client-account-token in the following manner:
kubectl create secret generic cloud-sql-client-account-token --from-file=credentials.json=$HOME/credentials.json
Where I downloaded the credentials.json file from a service account in project1.
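(For reference, such a key is usually generated from a service account in project1 with something like the command below; the service account name is a placeholder.)

gcloud iam service-accounts keys create $HOME/credentials.json \
    --iam-account=<sa-name>@project1.iam.gserviceaccount.com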
When I try to access the MySQL instance from my pod, I get the following error:
mysql --host=cloudsqlproxy-service-mainnet.dev.svc.cluster.local --port=3306
ERROR 1045 (28000): Access denied for user 'root'@'cloudsqlproxy~35.187.201.86' (using password: NO)
And in the cloud-proxy logs, I see the following:
2018/11/25 00:35:31 Instance project1:asia-east1:development closed connection
Is it necessary to launch a mysql instance in the same project (project2) as the pod? What am I missing?
EDIT
I can access the proxy on my local machine by setting it up like this:
/cloud_sql_proxy -instances=project1:asia-east1:development=tcp:3306
and then connecting to the proxy using a mysql client.
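(For reference, connecting to the local proxy with a MySQL client looks roughly like this; the host, user, and password prompt are illustrative.)

mysql --host=127.0.0.1 --port=3306 --user=root --password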
I have 6 HTTP micro-services. Currently they run in a crazy bash/custom deploy tools setup (dokku, mup).
I dockerized them and moved to Kubernetes on AWS (set up with kops). The last piece is converting my nginx config.
I'd like
All 6 to have SSL termination (not in the docker image)
4 need websockets and client IP session affinity (Meteor, Socket.io)
5 need http->https forwarding
1 serves the same content on http and https
I did #1, SSL termination, by setting the service type to LoadBalancer and using AWS-specific annotations. This created AWS load balancers, but it seems like a dead end for the other requirements.
I looked at Ingress, but don't see how to do it on AWS. Will this Ingress Controller work on AWS?
Do I need an nginx controller in each pod? This looked interesting, but I'm not sure how recent/relevant it is.
I'm not sure what direction to start in. What will work?
Mike
You should be able to use the nginx ingress controller to accomplish this.
SSL termination
Websocket support
http->https
Turn off the http->https redirect, as described in the link above
The README walks you through how to set it up, and there are plenty of examples.
The basic pieces you need to make this work are:
A default backend that will respond with 404 when there is no matching Ingress rule
The nginx ingress controller which will monitor your ingress rules and rewrite/reload nginx.conf whenever they change.
One or more ingress rules that describe how traffic should be routed to your services.
The end result is that you will have a single ELB that corresponds to your nginx ingress controller service, which in turn is responsible for routing to your individual services according to the ingress rules specified.
There may be a better way to do this. I wrote this answer because I asked the question; it's the best I could come up with from Pixel Elephant's doc links above.
The default-http-backend is very useful for debugging. +1
Ingress
This creates an endpoint on the node's IP address, which can change depending on where the Ingress container is running.
note the configmap at the bottom. Configured per environment.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: all-ingress
spec:
  tls:
  - hosts:
    - admin-stage.example.io
    secretName: tls-secret
  rules:
  - host: admin-stage.example.io
    http:
      paths:
      - backend:
          serviceName: admin
          servicePort: http-port
        path: /
---
apiVersion: v1
data:
  enable-sticky-sessions: "true"
  proxy-read-timeout: "7200"
  proxy-send-timeout: "7200"
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
App Service and Deployment
The service port needs to be named, or you may get "upstream default-admin-80 does not have any active endpoints. Using default backend".
apiVersion: v1
kind: Service
metadata:
  name: admin
spec:
  ports:
  - name: http-port
    port: 80
    protocol: TCP
    targetPort: http-port
  selector:
    app: admin
  sessionAffinity: ClientIP
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: admin
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: admin
      name: admin
    spec:
      containers:
      - image: example/admin:latest
        name: admin
        ports:
        - containerPort: 80
          name: http-port
        resources:
          requests:
            cpu: 500m
            memory: 1000Mi
        volumeMounts:
        - mountPath: /etc/env-volume
          name: config
          readOnly: true
      imagePullSecrets:
      - name: cloud.docker.com-pull
      volumes:
      - name: config
        secret:
          defaultMode: 420
          items:
          - key: admin.sh
            mode: 256
            path: env.sh
          - key: settings.json
            mode: 256
            path: settings.json
          secretName: env-secret
Ingress Nginx Docker Image
Note the default-ssl-certificate argument at the bottom.
Logging is great; see --v below.
Note that the Service will create an ELB on AWS, which can be used to configure DNS.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-service
spec:
  ports:
  - name: http-port
    port: 80
    protocol: TCP
    targetPort: http-port
  - name: https-port
    port: 443
    protocol: TCP
    targetPort: https-port
  selector:
    app: nginx-ingress-service
  sessionAffinity: None
  type: LoadBalancer
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-lb
spec:
  replicas: 1
  selector:
    k8s-app: nginx-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
        name: nginx-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        name: nginx-ingress-lb
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http-port
          containerPort: 80
          hostPort: 80
        - name: https-port
          containerPort: 443
          hostPort: 443
        # we expose 18080 to access nginx stats in url /nginx-status
        # this is optional
        - containerPort: 18080
          hostPort: 18080
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --default-ssl-certificate=default/tls-secret
        - --nginx-configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf
        - --v=2
Default Backend (this is copy/paste from .yaml file)
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    k8s-app: default-http-backend
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: default-http-backend
spec:
  replicas: 1
  selector:
    k8s-app: default-http-backend
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissable as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
This config uses three secrets:
tls-secret - 3 files: tls.key, tls.crt, dhparam.pem
env-secret - 2 files: admin.sh and settings.json. Container has start script to setup environment.
cloud.docker.com-pull
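A sketch of how these secrets might be created (file paths, registry server, and credentials are placeholders, not taken from the original answer):

kubectl create secret generic tls-secret \
    --from-file=tls.key --from-file=tls.crt --from-file=dhparam.pem
kubectl create secret generic env-secret \
    --from-file=admin.sh --from-file=settings.json
kubectl create secret docker-registry cloud.docker.com-pull \
    --docker-server=cloud.docker.com --docker-username=<user> \
    --docker-password=<password> --docker-email=<email>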