Can't get access to service - google-cloud-platform

I have the following problem: a Cloud SQL proxy pod is wrapped in a Service and must provide access to the database. I also have a Job that must create a new database for every branch.
But when this Job runs, the error below appears: I can't reach the cloudsql-proxy-service.
I can't understand why this happens. Thanks.
E psql: could not connect to server: Connection timed out
E Is the server running on host "cloudsql-proxy-service" (10.43.254.123) and accepting
E TCP/IP connections on port 5432?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudsql-proxy
  labels:
    type: backend
    name: app
  annotations:
    "helm.sh/created": {{ .Release.Time.Seconds | quote }}
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-20"
spec:
  replicas: 1
  selector:
    matchLabels:
      name: cloudsql-proxy
  template:
    metadata:
      labels:
        name: cloudsql-proxy
    spec:
      containers:
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command:
            - "/cloud_sql_proxy"
            - "-instances={{ .Values.testDatabaseInstanceConnectionName }}=tcp:5432"
            - "-credential_file=/secrets/cloudsql/credentials.json"
          securityContext:
            runAsUser: 2 # non-root user
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
          ports:
            - containerPort: 5432
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: {{ .Values.cloudSqlProxySecretName }}
---
apiVersion: v1
kind: Service
metadata:
  name: cloudsql-proxy-service
  labels:
    type: backend
    name: app
  annotations:
    "helm.sh/created": {{ .Release.Time.Seconds | quote }}
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-20"
spec:
  selector:
    name: cloudsql-proxy
  ports:
    - port: 5432
---
apiVersion: batch/v1
kind: Job
metadata:
  name: create-test-database
  labels:
    type: backend
    name: app
  annotations:
    "helm.sh/created": {{ .Release.Time.Seconds | quote }}
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-10"
spec:
  template:
    metadata:
      name: create-test-database
    spec:
      containers:
        - name: postgres-client
          image: kalumkalac/postgresql-client
          env:
            - name: PGUSER
              value: {{ .Values.testDatabaseCredentials.username }}
            - name: PGPASSWORD
              value: {{ .Values.testDatabaseCredentials.password }}
            - name: PGDATABASE
              value: {{ .Values.testDatabaseCredentials.defaultDatabaseName }}
            - name: PGHOST
              value: cloudsql-proxy-service
          command:
            - psql
            - -q
            - -c CREATE DATABASE {{ .Values.testDatabaseCredentials.name|quote }}
      restartPolicy: Never
  backoffLimit: 0 # Deny retry job
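For reference, a quick way to test whether the Service is reachable at all, independent of the Helm hook ordering, is a throwaway client pod run by hand (a debugging sketch only; the pod name is arbitrary, and it assumes the client image ships a shell):

apiVersion: v1
kind: Pod
metadata:
  name: psql-debug            # hypothetical name, not part of the chart
spec:
  restartPolicy: Never
  containers:
    - name: client
      image: kalumkalac/postgresql-client
      # Keep the pod alive, then `kubectl exec` into it and run
      # `psql -h cloudsql-proxy-service -p 5432 -U <user>` by hand.
      command: ["sleep", "3600"]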

Related

Service can't be registered in Target groups

I'm new to using Kubernetes and AWS so there are a lot of concepts I may not understand. I hope you can help me with this problem I am having.
I have 3 services, frontend, backend and auth each with their corresponding nodeport and an ingress that maps the one host to each service, everything is running on EKS and for the ingress deployment I am using AWS ingress controller. Once everything is deployed I try to register the node-group in the targets the frontend and auth services work correctly but backend stays in unhealthy state. I thought it could be a port problem but if you look at auth and backend they have almost the same deployment defined and both are api created with dotnet core. One thing to note is that I can do kubectl port-forward <backend-pod> 80:80 and it is running without problems. And when I run the kubectl describe ingresses command I get this:
Name:             ingress
Labels:           app.kubernetes.io/managed-by=Helm
Namespace:        default
Address:          xxxxxxxxxxxxxxxxxxxxxxxxxxx.xxxxx.elb.amazonaws.com
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host             Path  Backends
  ----             ----  --------
  domain.com
                   /     front-service:default-port (10.0.1.183:80,10.0.2.98:80)
  back.domain.com
                   /     backend-service:default-port (<none>)
  auth.domain.com
                   /     auth-service:default-port (10.0.1.30:80,10.0.1.33:80)
Annotations:       alb.ingress.kubernetes.io/listen-ports: [{"HTTPS":443}, {"HTTP":80}]
                   alb.ingress.kubernetes.io/scheme: internet-facing
                   alb.ingress.kubernetes.io/ssl-redirect: 443
                   kubernetes.io/ingress.class: alb
Events:
  Type    Reason                  Age                   From     Message
  ----    ------                  ---                   ----     -------
  Normal  SuccessfullyReconciled  8m20s (x15 over 41h)  ingress  Successfully reconciled
Frontend
Frontend
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front
  labels:
    name: front
spec:
  replicas: 2
  selector:
    matchLabels:
      name: front
  template:
    metadata:
      labels:
        name: front
    spec:
      containers:
        - name: frontend
          image: {{ .Values.image }}
          imagePullPolicy: Always
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: wrfront-{{ .Values.namespace }}-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      name: default-port
      protocol: TCP
  selector:
    name: front
---
Auth
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-wrauth-keys
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
---
apiVersion: "v1"
kind: "ConfigMap"
metadata:
  name: "auth-config-ocpm"
  labels:
    app: "auth"
data:
  ASPNETCORE_URL: "http://+:80"
  ASPNETCORE_ENVIRONMENT: "Development"
  ASPNETCORE_LOGGINGCONSOLEDISABLECOLORS: "true"
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "auth"
  labels:
    app: "auth"
spec:
  replicas: 2
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: "auth"
  template:
    metadata:
      labels:
        app: "auth"
    spec:
      volumes:
        - name: auth-keys-storage
          persistentVolumeClaim:
            claimName: pvc-wrauth-keys
      containers:
        - name: "api-auth"
          image: {{ .Values.image }}
          imagePullPolicy: Always
          ports:
            - containerPort: 80
          volumeMounts:
            - name: auth-keys-storage
              mountPath: "/app/auth-keys"
          env:
            - name: "ASPNETCORE_URL"
              valueFrom:
                configMapKeyRef:
                  key: "ASPNETCORE_URL"
                  name: "auth-config-ocpm"
            - name: "ASPNETCORE_ENVIRONMENT"
              valueFrom:
                configMapKeyRef:
                  key: "ASPNETCORE_ENVIRONMENT"
                  name: "auth-config-ocpm"
            - name: "ASPNETCORE_LOGGINGCONSOLEDISABLECOLORS"
              valueFrom:
                configMapKeyRef:
                  key: "ASPNETCORE_LOGGINGCONSOLEDISABLECOLORS"
                  name: "auth-config-ocpm"
---
apiVersion: v1
kind: Service
metadata:
  name: auth-service
spec:
  type: NodePort
  selector:
    app: auth
  ports:
    - name: default-port
      protocol: TCP
      port: 80
      targetPort: 80
Backend (Service with problem)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: {{ .Values.image }}
          imagePullPolicy: Always
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: NodePort
  selector:
    name: backend
  ports:
    - name: default-port
      protocol: TCP
      port: 80
      targetPort: 80
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    kubernetes.io/ingress.class: alb
    # SSL Settings
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/certificate-arn: {{ .Values.certificate }}
spec:
  rules:
    - host: {{ .Values.host }}
      http:
        paths:
          - path: /
            backend:
              service:
                name: front-service
                port:
                  name: default-port
            pathType: Prefix
    - host: back.{{ .Values.host }}
      http:
        paths:
          - path: /
            backend:
              service:
                name: backend-service
                port:
                  name: default-port
            pathType: Prefix
    - host: auth.{{ .Values.host }}
      http:
        paths:
          - path: /
            backend:
              service:
                name: auth-service
                port:
                  name: default-port
            pathType: Prefix
I've tried deploying other services and they work correctly. I've also tried running only the backend, or only one of the other services, but the same thing always happens, and always with the backend.
What could be happening? Is it a configuration problem, an error in the Ingress or the Deployment, or something in the backend service itself?
I would be very grateful for any help.
domain.com
                 /   front-service:default-port (10.0.1.183:80,10.0.2.98:80)
back.domain.com
                 /   backend-service:default-port (<none>)
auth.domain.com
                 /   auth-service:default-port (10.0.1.30:80,10.0.1.33:80)
This output is saying that your backend-service has no endpoints registered with the Ingress.
One thing to remember is that the Ingress routes to pod endpoint IPs (like the "10.0.1.30:80" in your output), not to the NodePort. And according to the docs, I'm not sure how you can have multiple NodePort Services on the same port. Also, kubectl port-forward tunnels directly to the pod, so the fact that it responds only shows the container is listening; it says nothing about whether the Service endpoints or the ALB target group are healthy.
But I think your issue is that the Ingress cannot locate any endpoints behind your backend-service (see the sketch below).
My suggestions are:
Try with only backend-service, with its port changed, and maybe without the auth and frontend Services. The default NodePort range is 30000-32767.
Go inside that pod (or create a new pod) and make a request to the Service using its URL to check what it returns. By default, the ALB only accepts status 200 from its homepage.
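As a concrete illustration (my reading of the manifests posted above, not something confirmed by the asker): the backend Service selects name: backend, while the backend Deployment labels its pods app: backend, so the Service matches no pods and therefore has no endpoints. A backend Service aligned with the pod labels would look roughly like this:

apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: NodePort
  selector:
    app: backend        # matches the pod template labels in the backend Deployment
  ports:
    - name: default-port
      protocol: TCP
      port: 80
      targetPort: 80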

Does anyone know how to deploy a Django project that uses a Postgres database on Minikube locally? I have tried all the references I could find, without success.

Here are my deployment files:
------
# Django Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django1
  labels:
    app: django1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: django-container
  template:
    metadata:
      labels:
        app: django-container
    spec:
      containers:
        - name: todo
          image: jayantkeer/image-of-kubernets
          command: ["python manage.py makemigrations", "python manage.py migrate", "python manage.py"] # runs migrations and starts the server
          ports:
            - containerPort: 8000
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: user
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: password
            - name: POSTGRES_HOST
              value: postgres-service
And the postgres service file:
apiVersion: v1
kind: Service
metadata:
  name: todo
  labels:
    app: todo
spec:
  type: NodePort
  selector:
    app: django-container
  ports:
    - port: 8000
      targetPort: 8000
-------
And the postgres deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-container
  template:
    metadata:
      labels:
        app: postgres-container
        tier: backend
    spec:
      containers:
        - name: postgres-container
          image: postgres:9.6.6
          env:
            - name: DATABASE_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: user
            - name: DATABASE_PASS
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: password
            - name: POSTGRES_DB
              value: kubernetes_django
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-volume-mount
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-volume-mount
          persistentVolumeClaim:
            claimName: postgres-pvc
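One thing that stands out (an observation about the manifests as posted, not a confirmed fix): the Django Deployment sets POSTGRES_HOST to postgres-service, but none of the Services shown above is actually named postgres-service. A minimal sketch of such a Service, assuming the Postgres pods keep the app: postgres-container label used in the Deployment, would be:

apiVersion: v1
kind: Service
metadata:
  name: postgres-service   # must match the POSTGRES_HOST value the Django pods use
spec:
  selector:
    app: postgres-container
  ports:
    - port: 5432
      targetPort: 5432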

Containers in pod won't talk to each other in Kubernetes

I have three containers in a pod: nginx, Redis, and a custom Django app. In Kubernetes, none of them seem to talk to each other; with Docker Compose they do, but I can't use Docker Compose in production.
The Django container gets this error:
[2022-06-20 21:45:49,420: ERROR/MainProcess] consumer: Cannot connect to redis://redis:6379/0: Error 111 connecting to redis:6379. Connection refused..
Trying again in 32.00 seconds... (16/100)
The nginx container starts but never sees any traffic, and trying to connect to localhost:8000 gets no reply.
Any idea what's wrong with my YAML file?
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  creationTimestamp: null
  name: djangonetwork
spec:
  ingress:
    - from:
        - podSelector:
            matchLabels:
              io.kompose.network/djangonetwork: "true"
  podSelector:
    matchLabels:
      io.kompose.network/djangonetwork: "true"
---
apiVersion: v1
data:
  DB_HOST: db
  DB_NAME: django_db
  DB_PASSWORD: password
  DB_PORT: "5432"
  DB_USER: user
kind: ConfigMap
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: web
  name: envs--django
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: web
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        io.kompose.network/djangonetwork: "true"
        io.kompose.service: web
    spec:
      containers:
        - image: nginx:alpine
          name: nginxcontainer
          ports:
            - containerPort: 8000
        - image: redis:alpine
          name: rediscontainer
          ports:
            - containerPort: 6379
          resources: {}
        - env:
            - name: DB_HOST
              valueFrom:
                configMapKeyRef:
                  key: DB_HOST
                  name: envs--django
            - name: DB_NAME
              valueFrom:
                configMapKeyRef:
                  key: DB_NAME
                  name: envs--django
            - name: DB_PASSWORD
              valueFrom:
                configMapKeyRef:
                  key: DB_PASSWORD
                  name: envs--django
            - name: DB_PORT
              valueFrom:
                configMapKeyRef:
                  key: DB_PORT
                  name: envs--django
            - name: DB_USER
              valueFrom:
                configMapKeyRef:
                  key: DB_USER
                  name: envs--django
          image: localhost:5000/integration/web:latest
          name: djangocontainer
          ports:
            - containerPort: 8000
          resources: {}
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: web
  name: web
spec:
  ports:
    - name: "8000"
      port: 8000
      targetPort: 8000
  selector:
    io.kompose.service: web
You've put all three containers into a single Pod. That's usually not the preferred approach: it means you can't restart one of the containers without restarting all of them (any update to your application code requires discarding your Redis cache) and you can't individually scale the component parts (if you need five replicas of your application, do you also need five reverse proxies and can you usefully use five Redises?).
Instead, a preferred approach is to split these into three separate Deployments (or possibly use a StatefulSet for Redis with persistence). Each has a corresponding Service, and then those Service names can be used as DNS names.
A very minimal example for Redis could look like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:             # required by apps/v1; must match the template labels
    matchLabels:
      service: web
      component: redis
  template:
    metadata:
      labels:
        service: web
        component: redis
    spec:
      containers:
        - name: redis
          image: redis
          ports:
            - name: redis
              containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis                # <-- this name will be a DNS name
spec:
  selector:                  # matches the template: { metadata: { labels: } }
    service: web
    component: redis
  ports:
    - name: redis
      port: 6379
      targetPort: redis      # matches a containerPorts: [{ name: }]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  ...
          env:
            - name: REDIS_HOST
              value: redis   # matches the Service
If all three parts are in the same Pod, then the Service can't really distinguish which part it's talking to. The containers in a single Pod share a network namespace and need to talk to each other as localhost; the containers: [{ name: }] setting has no practical effect on networking (see the sketch below).
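If you do keep all three containers in one Pod, the change that implies is roughly the following (a sketch only: it assumes the Django settings read the Redis host from an environment variable; REDIS_HOST does not appear in the posted manifests and is used here purely as an illustration):

# added to the djangocontainer entry of the existing web Deployment
env:
  - name: REDIS_HOST
    value: "127.0.0.1"   # containers in the same Pod share localhost
  - name: REDIS_PORT
    value: "6379"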

kubernetes django deployment with gunicorn

I am trying to deploy an empty alang/django image to my Minikube cluster using Kubernetes.
This is my deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django
  labels:
    app: django
spec:
  replicas: 2
  selector:
    matchLabels:
      pod: django
  template:
    metadata:
      labels:
        pod: django
    spec:
      restartPolicy: "Always"
      containers:
        - name: django
          image: alang/django
          ports:
            - containerPort: 8000
          env:
            - name: POSTGRES_USER
              valueFrom:
                configMapKeyRef:
                  name: postgresql-db-configmap
                  key: pg-username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgresql-db-secret
                  key: pg-db-password
            - name: POSTGRES_HOST
              value: postgres-service
            - name: REDIS_HOST
              value: redis-service
            - name: GUNICORN_CMD_ARGS
              value: "--bind 0.0.0.0:8000"
But I am facing issues with the deployment, I think related to gunicorn; I get this back:
TypeError: the 'package' argument is required to perform a relative import for '.wsgi'
Is there any way to deploy it correctly?

Helm charts nested loops

I am trying to generate Deployments for my Helm chart using this template
{{- range .Values.services }}
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-{{ . }}
spec:
  replicas: {{ .replicaCount }}
  template:
    metadata:
      labels:
        app: myapp-{{ . }}
        chart: myapp-{{ $.Values.cluster }}-{{ $.Values.environment }}
    spec:
      containers:
        - name: myapp-{{ . }}
          image: {{ $.Values.containerRegistry }}/myapp-{{ . }}:latest
          ports:
            - containerPort: {{ .targetPort }}
          env:
{{- with .environmentVariables }}
{{ indent 10 }}
{{- end }}
      imagePullSecrets:
        - name: myregistry
{{- end }}
for two of my services. In values.yaml I have:
environment: dev
cluster: sandbox
ingress:
  enabled: true
containerRegistry: myapp.io
services:
  - backend:
      port: 80
      targetPort: 8080
      replicaCount: 1
      environmentVariables:
        - name: SOME_VAR
          value: "hello"
  - web:
      port: 80
      targetPort: 8080
      replicaCount: 1
      environmentVariables:
        - name: SOME_VAR
          value: "hello"
... but the output is not being properly formatted
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-map[backend:map[replicaCount:1 targetPort:8080 environmentVariables:[map[name:SOME_VAR value:hello] port:80]]
instead of
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-web
(...)
and another config
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-backend
(...)
What functions can I use, or what different data structure should I use? None of the references (e.g. .environmentVariables) resolve correctly.
I think you should reconsider the way the data is structured; this would work better:
services:
  - name: backend
    settings:
      port: 80
      targetPort: 8080
      replicaCount: 1
      environmentVariables:
        - name: SOME_VAR
          value: "hello"
  - name: web
    settings:
      port: 80
      targetPort: 8080
      replicaCount: 1
      environmentVariables:
        - name: SOME_VAR
          value: "hello"
And your Deployment to look like this:
{{- range .Values.services }}
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-{{ .name }}
spec:
  replicas: {{ .settings.replicaCount }}
  template:
    metadata:
      labels:
        app: myapp-{{ .name }}
    spec:
      containers:
        - name: myapp-{{ .name }}
          image: {{ $.Values.containerRegistry }}/myapp-{{ .name }}:latest
          ports:
            - containerPort: {{ .settings.targetPort }}
          env:
{{- with .settings.environmentVariables }}
{{ toYaml . | trim | indent 6 }}
{{- end }}
      imagePullSecrets:
        - name: myregistry
{{- end }}
This would actually create two Deployments, thanks to the --- separator added at each iteration.
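Rendered (for example with helm template, assuming the restructured values.yaml above), the loop then produces two documents along these lines, with the bodies elided:

---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-backend
(...)
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-web
(...)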