I'm using Doctrine ORM with Symfony, the PHP framework. I'm getting bizarre behaviour when trying to connect to Cloud SQL from GKE.
I'm able to get a connection to the DB via Doctrine on the command line; for example, php bin/console doctrine:database:create succeeds and I can see a connection opened in the proxy pod logs.
But when I try to connect to the DB via Doctrine in my application, I run into this error without fail:
An exception occurred in driver: SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Name or service not known
I have been trying to get my head around this, but it doesn't make sense: why would I be able to connect via the command line but not in my application?
I followed the documentation here for setting up a DB connection using the Cloud SQL proxy. This is my Kubernetes deployment:
---
apiVersion: "extensions/v1beta1"
kind: "Deployment"
metadata:
  name: "riptides-api"
  namespace: "default"
  labels:
    app: "riptides-api"
    microservice: "riptides"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: "riptides-api"
      microservice: "riptides"
  template:
    metadata:
      labels:
        app: "riptides-api"
        microservice: "riptides"
    spec:
      containers:
      - name: "api-sha256"
        image: "eu.gcr.io/riptides/api@sha256:ce0ead9d1dd04d7bfc129998eca6efb58cb779f4f3e41dcc3681c9aac1156867"
        env:
        - name: DB_HOST
          value: 127.0.0.1:3306
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: riptides-mysql-user-skye
              key: user
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: riptides-mysql-user-skye
              key: password
        - name: DB_NAME
          value: riptides
        lifecycle:
          postStart:
            exec:
              command: ["/bin/bash", "-c", "php bin/console doctrine:migrations:migrate -n"]
        volumeMounts:
        - name: keys
          mountPath: "/app/config/jwt"
          readOnly: true
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=riptides:europe-west4:riptides-sql=tcp:3306",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        # [START cloudsql_security_context]
        securityContext:
          runAsUser: 2  # non-root user
          allowPrivilegeEscalation: false
        # [END cloudsql_security_context]
        volumeMounts:
        - name: riptides-mysql-service-account
          mountPath: /secrets/cloudsql
          readOnly: true
      volumes:
      - name: keys
        secret:
          secretName: riptides-api-keys
          items:
          - key: private.pem
            path: private.pem
          - key: public.pem
            path: public.pem
      - name: riptides-mysql-service-account
        secret:
          secretName: riptides-mysql-service-account
---
apiVersion: "autoscaling/v2beta1"
kind: "HorizontalPodAutoscaler"
metadata:
  name: "riptides-api-hpa"
  namespace: "default"
  labels:
    app: "riptides-api"
    microservice: "riptides"
spec:
  scaleTargetRef:
    kind: "Deployment"
    name: "riptides-api"
    apiVersion: "apps/v1beta1"
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: "Resource"
    resource:
      name: "cpu"
      targetAverageUtilization: 70
If anyone has any suggestions, I'd be forever grateful.
It doesn't look like anything is wrong with your k8s YAML; the problem is more likely in how you are connecting from Symfony. According to the documentation here, Symfony expects the DB URL to be passed in through an environment variable called "DATABASE_URL". See the following example:
# customize this line!
DATABASE_URL="postgres://db_user:db_password@127.0.0.1:5432/db_name"
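In a GKE deployment like yours, one way to supply this (a sketch, not your exact config) is to add a DATABASE_URL entry to the container's env; Kubernetes expands $(VAR) references to variables defined earlier in the same list:
env:
  - name: DB_USER
    valueFrom:
      secretKeyRef:
        name: riptides-mysql-user-skye
        key: user
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: riptides-mysql-user-skye
        key: password
  # assumes the Cloud SQL proxy sidecar is listening on 127.0.0.1:3306, as in the
  # deployment above; $(VAR) only expands variables defined earlier in this list
  - name: DATABASE_URL
    value: "mysql://$(DB_USER):$(DB_PASSWORD)@127.0.0.1:3306/riptides"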
This was happening because Doctrine was using its default values instead of the environment variables I had set up in my deployment (which should have overridden them). I changed the environment variable names to be different from the default ones and it works.
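For reference, the default Symfony Doctrine configuration reads that variable in config/packages/doctrine.yaml; if you rename the variable, the config has to point at the new name. A minimal sketch, where APP_DATABASE_URL is just an illustrative non-default name:
# config/packages/doctrine.yaml (sketch)
doctrine:
  dbal:
    url: '%env(resolve:APP_DATABASE_URL)%'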
Related
I got this deployment from the internet and tested it with good results. My question is: are there config parameters I can use to pass a role ARN instead of an access key and secret key? I tried passing a role ARN in various forms inside aws-credentials, but to no avail.
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cwagent-prometheus
  namespace: amazon-cloudwatch
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cwagent-prometheus
  template:
    metadata:
      labels:
        app: cwagent-prometheus
    spec:
      containers:
      - name: cloudwatch-agent
        image: amazon/cloudwatch-agent:1.247348.0b251302
        imagePullPolicy: Always
        env:
        - name: CI_VERSION
          value: "k8s/1.3.8"
        volumeMounts:
        - name: prometheus-cwagentconfig
          mountPath: /etc/cwagentconfig
        - name: prometheus-config
          mountPath: /etc/prometheusconfig
        - name: aws-credentials
          mountPath: /root/.aws
      volumes:
      - name: prometheus-cwagentconfig
        configMap:
          name: prometheus-cwagentconfig
      - name: prometheus-config
        configMap:
          name: prometheus-config
      - name: aws-credentials
        secret:
          secretName: aws-credentials
      serviceAccountName: cwagent-prometheus
The typical working solution is to provide aws-credentials with the format:
[AmazonCloudWatchAgent]
aws_access_key_id = $AWS_ID
aws_secret_access_key = $AWS_KEY
For instance, I tried changing it to:
[AmazonCloudWatchAgent]
role_arn = $ROLE_ARN
With this change, the CloudWatch agent complains about not finding aws_access_key_id in the credentials file.
This is a known issue that has still not been resolved:
Use IAM Roles for Service Accounts issue on amazon-cloudwatch-agent
aws-helm-eks-charts issue
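For context, IAM Roles for Service Accounts would normally be wired up by annotating the ServiceAccount that the deployment already references (serviceAccountName: cwagent-prometheus); whether the agent actually honours it is exactly what the issues above track. A minimal sketch, with a placeholder role ARN:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cwagent-prometheus
  namespace: amazon-cloudwatch
  annotations:
    # placeholder ARN; requires an EKS cluster with an IAM OIDC provider configured
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/cwagent-prometheus-role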
I have three containers in a pod: nginx, Redis, and a custom Django app. In Kubernetes, none of them seem to talk to each other; in Docker Compose they do, but I can't use Docker Compose in production.
The Django container gets this error:
[2022-06-20 21:45:49,420: ERROR/MainProcess] consumer: Cannot connect to redis://redis:6379/0: Error 111 connecting to redis:6379. Connection refused..
Trying again in 32.00 seconds... (16/100)
and the nginx container starts but never shows any traffic; trying to connect to localhost:8000 gets no reply.
Any idea what's wrong with my YAML file?
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  creationTimestamp: null
  name: djangonetwork
spec:
  ingress:
    - from:
        - podSelector:
            matchLabels:
              io.kompose.network/djangonetwork: "true"
  podSelector:
    matchLabels:
      io.kompose.network/djangonetwork: "true"
---
apiVersion: v1
data:
  DB_HOST: db
  DB_NAME: django_db
  DB_PASSWORD: password
  DB_PORT: "5432"
  DB_USER: user
kind: ConfigMap
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: web
  name: envs--django
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: web
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        io.kompose.network/djangonetwork: "true"
        io.kompose.service: web
    spec:
      containers:
        - image: nginx:alpine
          name: nginxcontainer
          ports:
            - containerPort: 8000
        - image: redis:alpine
          name: rediscontainer
          ports:
            - containerPort: 6379
          resources: {}
        - env:
            - name: DB_HOST
              valueFrom:
                configMapKeyRef:
                  key: DB_HOST
                  name: envs--django
            - name: DB_NAME
              valueFrom:
                configMapKeyRef:
                  key: DB_NAME
                  name: envs--django
            - name: DB_PASSWORD
              valueFrom:
                configMapKeyRef:
                  key: DB_PASSWORD
                  name: envs--django
            - name: DB_PORT
              valueFrom:
                configMapKeyRef:
                  key: DB_PORT
                  name: envs--django
            - name: DB_USER
              valueFrom:
                configMapKeyRef:
                  key: DB_USER
                  name: envs--django
          image: localhost:5000/integration/web:latest
          name: djangocontainer
          ports:
            - containerPort: 8000
          resources: {}
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: web
  name: web
spec:
  ports:
    - name: "8000"
      port: 8000
      targetPort: 8000
  selector:
    io.kompose.service: web
You've put all three containers into a single Pod. That's usually not the preferred approach: it means you can't restart one of the containers without restarting all of them (any update to your application code requires discarding your Redis cache) and you can't individually scale the component parts (if you need five replicas of your application, do you also need five reverse proxies and can you usefully use five Redises?).
Instead, a preferred approach is to split these into three separate Deployments (or possibly use a StatefulSet for Redis with persistence). Each has a corresponding Service, and then those Service names can be used as DNS names.
A very minimal example for Redis could look like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector: # required by apps/v1; matches the template labels below
    matchLabels:
      service: web
      component: redis
  template:
    metadata:
      labels:
        service: web
        component: redis
    spec:
      containers:
        - name: redis
          image: redis
          ports:
            - name: redis
              containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis # <-- this name will be a DNS name
spec:
  selector: # matches the template: { metadata: { labels: } }
    service: web
    component: redis
  ports:
    - name: redis
      port: 6379
      targetPort: redis # matches a containerPorts: [{ name: }]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  ...
  env:
    - name: REDIS_HOST
      value: redis # matches the Service
If all three parts are in the same Pod, then the Service can't really distinguish which part it's talking to. Since the containers share a single network namespace, they need to talk to each other over localhost; the containers: [{ name: }] values have no practical effect on networking.
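For completeness, a similarly minimal sketch of a Service in front of the Django Deployment, which a separately deployed nginx tier or an Ingress could then reach as app:8000 (the pod labels here are assumed to mirror the Redis example):
apiVersion: v1
kind: Service
metadata:
  name: app # <-- DNS name for the Django tier
spec:
  selector: # assumed labels on the app Deployment's pod template
    service: web
    component: app
  ports:
    - name: http
      port: 8000
      targetPort: 8000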
I am trying to deploy the alang/django image using Kubernetes on my minikube cluster.
This is my deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django
  labels:
    app: django
spec:
  replicas: 2
  selector:
    matchLabels:
      pod: django
  template:
    metadata:
      labels:
        pod: django
    spec:
      restartPolicy: "Always"
      containers:
        - name: django
          image: alang/django
          ports:
            - containerPort: 8000
          env:
            - name: POSTGRES_USER
              valueFrom:
                configMapKeyRef:
                  name: postgresql-db-configmap
                  key: pg-username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgresql-db-secret
                  key: pg-db-password
            - name: POSTGRES_HOST
              value: postgres-service
            - name: REDIS_HOST
              value: redis-service
            - name: GUNICORN_CMD_ARGS
              value: "--bind 0.0.0.0:8000"
but I am facing issues with the deployment, I think with gunicorn; I get this back:
TypeError: the 'package' argument is required to perform a relative import for '.wsgi'
Is there any way to deploy it correctly?
I have two projects in GCP: project1 and project2.
I have set up a MySQL instance in project1.
I have also set up cloudsqlproxy (a pod) and mypod in a GKE cluster in project2.
I want to access the MySQL instance from mypod through cloudsqlproxy.
This is my manifest for cloudsqlproxy:
apiVersion: v1
kind: Service
metadata:
  name: cloudsqlproxy-service-mainnet
  namespace: dev
spec:
  ports:
    - port: 3306
      targetPort: port-mainnet
  selector:
    app: cloudsqlproxy
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudsqlproxy
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloudsqlproxy
  template:
    metadata:
      labels:
        app: cloudsqlproxy
    spec:
      containers:
        # Make sure to specify image tag in production
        # Check out the newest version in release page
        # https://github.com/GoogleCloudPlatform/cloudsql-proxy/releases
        - name: cloudsqlproxy
          image: b.gcr.io/cloudsql-docker/gce-proxy:latest
          # 'Always' if imageTag is 'latest', else set to 'IfNotPresent'
          imagePullPolicy: Always
          command:
            - /cloud_sql_proxy
            - -dir=/cloudsql
            - -instances=project1:asia-east1:development=tcp:0.0.0.0:3306
            - -credential_file=/credentials/credentials.json
          ports:
            - name: port-mainnet
              containerPort: 3306
          volumeMounts:
            - mountPath: /cloudsql
              name: cloudsql
            - mountPath: /credentials
              name: cloud-sql-client-account-token
      volumes:
        - name: cloudsql
          emptyDir: {}
        - name: cloud-sql-client-account-token
          secret:
            secretName: cloud-sql-client-account-token
I have set up cloud-sql-client-account-token in the following manner:
kubectl create secret generic cloud-sql-client-account-token --from-file=credentials.json=$HOME/credentials.json
where credentials.json was downloaded from a service account in project1.
When I try to access the MySQL instance from my pod, I get the following error:
mysql --host=cloudsqlproxy-service-mainnet.dev.svc.cluster.local --port=3306
ERROR 1045 (28000): Access denied for user 'root'@'cloudsqlproxy~35.187.201.86' (using password: NO)
And in the cloud-proxy logs, I see the following:
2018/11/25 00:35:31 Instance project1:asia-east1:development closed connection
Is it necessary to launch a mysql instance in the same project (project2) as the pod? What am I missing?
EDIT
I can access the proxy on my local machine by setting it up like this:
/cloud_sql_proxy -instances=project1:asia-east1:development=tcp:3306
and then connecting to the proxy using a mysql client.
My YAML file:
kind: ReplicationController
apiVersion: v1
metadata:
  name: locust-master
  labels:
    name: locust
    role: master
spec:
  replicas: 1
  selector:
    name: locust
    role: master
  template:
    metadata:
      labels:
        name: locust
        role: master
    spec:
      containers:
        - name: locust
          image: gcr.io/MY_PROJECT/locust-tasks:latest
          env:
            - name: LOCUST_MODE
              key: LOCUST_MODE
              value: master
            - name: TARGET_HOST
              key: TARGET_HOST
              value: http://MY_WEBSITE.io
          ports:
            - name: loc-master-web
              containerPort: 8089
              protocol: TCP
            - name: loc-master-p1
              containerPort: 5557
              protocol: TCP
            - name: loc-master-p2
              containerPort: 5558
              protocol: TCP
Running kubectl create -f locust-master-controller.yaml gives:
error: error validating "locust-master-controller.yaml": error validating data: [found invalid field key for v1.EnvVar, found invalid field key for v1.EnvVar]; if you choose to ignore these errors, turn validation off with --validate=false
I am basically following the instructions word for word on:
https://github.com/GoogleCloudPlatform/distributed-load-testing-using-kubernetes
Just delete these two lines:
key: LOCUST_MODE
and
key: TARGET_HOST
There is no field called key in the env section. The complete documentation for env is here.
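With those two lines removed, the env block validates; it would look like this:
env:
  - name: LOCUST_MODE
    value: master
  - name: TARGET_HOST
    value: http://MY_WEBSITE.io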