Connecting to Google Cloud MySQL instance from GKE cluster using cloudsqlproxy - google-cloud-platform

I have two projects in GCP, project1 and project2.
I have set up a MySQL instance in project1.
I have also set up cloudsqlproxy (a pod) and mypod in a GKE cluster in project2.
I want to access the MySQL instance from mypod through cloudsqlproxy.
I have the following manifest for cloudsqlproxy:
apiVersion: v1
kind: Service
metadata:
  name: cloudsqlproxy-service-mainnet
  namespace: dev
spec:
  ports:
  - port: 3306
    targetPort: port-mainnet
  selector:
    app: cloudsqlproxy
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudsqlproxy
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloudsqlproxy
  template:
    metadata:
      labels:
        app: cloudsqlproxy
    spec:
      containers:
      # Make sure to specify an image tag in production.
      # Check out the newest version on the releases page:
      # https://github.com/GoogleCloudPlatform/cloudsql-proxy/releases
      - name: cloudsqlproxy
        image: b.gcr.io/cloudsql-docker/gce-proxy:latest
        # 'Always' if imageTag is 'latest', else set to 'IfNotPresent'
        imagePullPolicy: Always
        command:
        - /cloud_sql_proxy
        - -dir=/cloudsql
        - -instances=project1:asia-east1:development=tcp:0.0.0.0:3306
        - -credential_file=/credentials/credentials.json
        ports:
        - name: port-mainnet
          containerPort: 3306
        volumeMounts:
        - mountPath: /cloudsql
          name: cloudsql
        - mountPath: /credentials
          name: cloud-sql-client-account-token
      volumes:
      - name: cloudsql
        emptyDir: {}
      - name: cloud-sql-client-account-token
        secret:
          secretName: cloud-sql-client-account-token
I have set up the cloud-sql-client-account-token secret in the following manner:
kubectl create secret generic cloud-sql-client-account-token --from-file=credentials.json=$HOME/credentials.json
I downloaded the credentials.json file from a service account in project1.
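Note that both the cloudsqlproxy-service-mainnet service and the cloudsqlproxy deployment above live in the dev namespace, and a pod can only mount secrets from its own namespace, so the secret has to exist in dev as well. A quick way to check (a hedged sketch; the names are taken from the manifests above):
kubectl get secret cloud-sql-client-account-token -n dev
kubectl describe secret cloud-sql-client-account-token -n dev   # should list a credentials.json key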
When I try to access the MySQL instance from my pod, I get the following error:
mysql --host=cloudsqlproxy-service-mainnet.dev.svc.cluster.local --port=3306
ERROR 1045 (28000): Access denied for user 'root'@'cloudsqlproxy~35.187.201.86' (using password: NO)
And in the cloud-proxy logs, I get the following:
2018/11/25 00:35:31 Instance project1:asia-east1:development closed connection
Is it necessary to launch a mysql instance in the same project (project2) as the pod? What am I missing?
EDIT
I can access the proxy from my local machine by running it like this
/cloud_sql_proxy -instances=project1:asia-east1:development=tcp:3306
and then connecting to the proxy using a mysql client.
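As an aside, the error text (using password: NO) means the mysql client sent no password at all, so the request did reach the Cloud SQL instance through the proxy and was rejected at MySQL authentication. A hedged sketch of the same test with explicit credentials (the user and password are whatever is configured on the Cloud SQL instance, not values from this post):
mysql --host=cloudsqlproxy-service-mainnet.dev.svc.cluster.local --port=3306 --user=root -p
# -p makes the client prompt for the password instead of sending none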

Related

Google Kubernetes Engine & Github actions deploy deployments.apps "gke-deployment" not found

I've been trying to run the Google Kubernetes Engine deploy action for my GitHub repo.
I have a GitHub workflow job running and everything works just fine except the deploy step.
Here is my error:
Error from server (NotFound): deployments.apps "gke-deployment" not found
I'm assuming my yaml files are at fault. I'm fairly new to this, so I got these from the internet and just edited them a bit to fit my code, but I don't know the details.
Kustomize.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: arbitrary
# Example configuration for the webserver
# at https://github.com/monopole/hello
commonLabels:
  app: videoo-render
resources:
- deployment.yaml
- service.yaml
deployment.yaml (I think the error is here):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: the-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      deployment: video-render
  template:
    metadata:
      labels:
        deployment: video-render
    spec:
      containers:
      - name: the-container
        image: monopole/hello:1
        command: ["/video-render",
                  "--port=8080",
                  "--enableRiskyFeature=$(ENABLE_RISKY)"]
        ports:
        - containerPort: 8080
        env:
        - name: ALT_GREETING
          valueFrom:
            configMapKeyRef:
              name: the-map
              key: altGreeting
        - name: ENABLE_RISKY
          valueFrom:
            configMapKeyRef:
              name: the-map
              key: enableRisky
service.yaml:
kind: Service
apiVersion: v1
metadata:
  name: the-service
spec:
  selector:
    deployment: video-render
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8666
    targetPort: 8080
I'm using an ubuntu 20.04 image; the repo is C++ code.
For anyone wondering why this happens:
You have to edit this line in the workflow to reference an existing deployment:
DEPLOYMENT_NAME: gke-deployment # TODO: update to deployment name,
to:
DEPLOYMENT_NAME: existing-deployment-name
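For context, a hedged sketch of the env block in the stock "Build and Deploy to GKE" starter workflow; with the manifests above, DEPLOYMENT_NAME has to match the Deployment's metadata.name (the-deployment). Every other value here is a placeholder, and the exact variable names may differ between workflow versions:
env:
  PROJECT_ID: ${{ secrets.GKE_PROJECT }}   # placeholder
  GKE_CLUSTER: my-cluster                  # placeholder
  GKE_ZONE: europe-west1-b                 # placeholder
  IMAGE: videoo-render                     # placeholder
  DEPLOYMENT_NAME: the-deployment          # must name a Deployment your manifests actually create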

Quorum nodes run fine with docker-compose, but crash when deployed on kubernetes

Following the steps outlined here, I created a basic Quorum network with 4 nodes and IBFT consensus. I then created a Docker image for each of the nodes, copying the contents of each node's directory onto the image. The image was created from the official quorumengineering/quorum image, and when started as a container it executes the geth command. An example Dockerfile follows (different nodes have different rpcports/ports):
FROM quorumengineering/quorum
WORKDIR /opt/node
COPY . /opt/node
ENTRYPOINT []
CMD PRIVATE_CONFIG=ignore nohup geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22001 --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,istanbul --rpcvhosts="*" --emitcheckpoints --port 30304
I then made a docker-compose file to run the images.
version: '2'
volumes:
  qnode0-data:
  qnode1-data:
  qnode2-data:
  qnode3-data:
services:
  qnode0:
    container_name: qnode0
    image: <myDockerHub>/qnode0
    ports:
      - 22000:22000
      - 30303:30303
    volumes:
      - qnode0-data:/opt/node
  qnode1:
    container_name: qnode1
    image: <myDockerHub>/qnode1
    ports:
      - 22001:22001
      - 30304:30304
    volumes:
      - qnode1-data:/opt/node
  qnode2:
    container_name: qnode2
    image: <myDockerHub>/qnode2
    ports:
      - 22002:22002
      - 30305:30305
    volumes:
      - qnode2-data:/opt/node
  qnode3:
    container_name: qnode3
    image: <myDockerHub>/qnode3
    ports:
      - 22003:22003
      - 30306:30306
    volumes:
      - qnode3-data:/opt/node
When running these images locally with docker-compose, the nodes start and I can even see the created blocks via a blockchain explorer. However, when I try to run this in a kubernetes cluster, either locally with minikube, or on AWS, the nodes do not start but rather crash.
To deploy on kubernetes I made the following three yaml files for each node (12 files in total):
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: qnode0
  name: qnode0
spec:
  replicas: 1
  selector:
    matchLabels:
      app: qnode0
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: qnode0
    spec:
      containers:
      - image: <myDockerHub>/qnode0
        imagePullPolicy: ""
        name: qnode0
        ports:
        - containerPort: 22000
        - containerPort: 30303
        resources: {}
        volumeMounts:
        - mountPath: /opt/node
          name: qnode0-data
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
      - name: qnode0-data
        persistentVolumeClaim:
          claimName: qnode0-data
status: {}
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: qnode0-service
spec:
  selector:
    app: qnode0
  ports:
  - name: rpcport
    protocol: TCP
    port: 22000
    targetPort: 22000
  - name: netlistenport
    protocol: TCP
    port: 30303
    targetPort: 30303
persistentvolumeclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: qnode0-data
  name: qnode0-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
When trying to run on a kubernetes cluster, each node runs into this error:
ERROR[] Cannot start mining without etherbase err="etherbase must be explicitly specified"
Fatal: Failed to start mining: etherbase missing: etherbase must be explicitly specified
which does not occur when running locally with docker-compose. After examining the logs, I saw there is a difference in how the nodes start up locally with docker-compose versus on kubernetes, namely the following lines:
Locally I see the following lines in each node's output:
INFO [] Initialising Ethereum protocol name=istanbul versions="[99 64]" network=10 dbversion=7
...
DEBUG[] InProc registered namespace=istanbul
On kubernetes (either in minikube or on AWS) I see these lines instead:
INFO [] Initialising Ethereum protocol name=eth versions="[64 63]" network=10 dbversion=7
...
DEBUG[] IPC registered namespace=eth
DEBUG[] IPC registered namespace=ethash
Why is this happening? What is the significance of name=istanbul/eth? My common-sense logic says that the error happens because name=eth is used instead of name=istanbul, but I don't know the significance of this, and more importantly, I don't know what I did to inadvertently affect the kubernetes deployment.
Any ideas how to fix this?
EDIT
I tried to address what David Maze mentioned in his comment, i.e. that the node directory gets overwritten, so I created a new directory in the image with
RUN mkdir /opt/nodedata/
and used that to mount volumes in kubernetes. I also used StatefulSets instead of Deployments in kubernetes. The relevant yaml follows:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: qnode0
spec:
  serviceName: qnode0
  replicas: 1
  selector:
    matchLabels:
      app: qnode0
  template:
    metadata:
      labels:
        app: qnode0
    spec:
      containers:
      - image: <myDockerHub>/qnode0
        imagePullPolicy: ""
        name: qnode0
        ports:
        - protocol: TCP
          containerPort: 22000
        - protocol: TCP
          containerPort: 30303
        volumeMounts:
        - mountPath: /opt/nodedata
          name: qnode0-data
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
      - name: qnode0-data
        persistentVolumeClaim:
          claimName: qnode0-data
Changing the volume mount immediately produced the correct behaviour of
INFO [] Initialising Ethereum protocol name=istanbul
However, I had networking issues, which I solved by using the environment variables that kubernetes sets for each service, which include the IP each service is running at, e.g.:
QNODE0_PORT_30303_TCP_ADDR=172.20.115.164
I also changed my kubernetes services a little, as follows:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: qnode0
  name: qnode0
spec:
  ports:
  - name: "22000"
    port: 22000
    targetPort: 22000
  - name: "30303"
    port: 30303
    targetPort: 30303
  selector:
    app: qnode0
Using the environment variables to properly initialise the quorum files solved the networking problem.
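For illustration only, a hedged sketch of how such an environment variable could be substituted into a node's static-nodes.json at container start-up; the __QNODE0_IP__ placeholder token and the file path under the new mount are assumptions, and only QNODE0_PORT_30303_TCP_ADDR comes from the output above:
# Hedged sketch: replace a hypothetical placeholder in static-nodes.json with the
# service IP that kubernetes exposes as an environment variable, before starting geth.
sed -i "s/__QNODE0_IP__/${QNODE0_PORT_30303_TCP_ADDR}/g" /opt/nodedata/data/static-nodes.json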
However, when I delete my stateful sets and my services with:
kubectl delete -f <my_statefulset_and_service_yamls>
and then apply them again:
kubectl apply -f <my_statefulset_and_service_yamls>
quorum starts from scratch, i.e. it does not continue block creation from where it stopped but starts from 1 again, as follows:
Inserted new block number=1 hash=1c99d0…fe59bb
So the state of the blockchain is not saved, as was my initial fear. What should I do to address this?

Postgres+django on kubernetes: django.db.utils.OperationalError: could not connect to server: Connection timed out

My Django app can't connect to the Postgres server on Kubernetes. All other pods are able to connect to this Postgres server and the creds are valid as well; any idea why this Django app can't?
django.db.utils.OperationalError: could not connect to server:
Connection timed out
Is the server running on host "postgres-postgresql" (10.245.56.118) and accepting
TCP/IP connections on port 5342?
I logged into the django app container and tried to connect via Django's data access layer and via psql. Only psql works without any problems.
postgres:
https://github.com/cetic/helm-postgresql
Kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    githubdir.service: valnet
  name: valnet
spec:
  selector:
    matchLabels:
      app: valnet
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: valnet
    spec:
      containers:
      - args:
        env:
        - name: VALNET_DATABASE_USER
          value: "postgres"
        - name: VALNET_DATABASE_PASSWORD
          value: "gdrBP9xxDZ"
        - name: VALNET_DATABASE_HOST
          value: "postgres-postgresql"
        - name: VALNET_DATABASE_PORT
          value: "5342"
        image: donutloop/valnet:v0.3.0
        name: valnet
        ports:
        - containerPort: 8000
        resources: {}
      restartPolicy: Always
That chart configured Postgres to listen on port 5432. You tried to connect to port 5342. Those are different.
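In other words, the fix is a one-character change in the Deployment's env block, e.g.:
        - name: VALNET_DATABASE_PORT
          value: "5432"   # the chart's Postgres listens on 5432, not 5342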

Kubernetes Django Restframework on GKE

I've tried to get access via the external IP from the load balancer, but I can't connect with my browser through that IP.
I get a connection refused on port 80. I don't know if my yaml file is incorrect or if it's a config issue on my load balancer.
I built my Docker image successfully with my requirements.txt and uploaded it to the bucket so GKE can pull the Docker image into Kubernetes.
I deployed my image with the command:
kubectl create -f <filename>.yaml
with the following yaml file:
# [START kubernetes_deployment]
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: example
  labels:
    app: example
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: faxi
        image: gcr.io/<my_id_stands_here>/<bucketname>
        command: ["python3", "manage.py", "runserver"]
        env:
        # [START cloudsql_secrets]
        - name: DATABASE_USER
          valueFrom:
            secretKeyRef:
              name: cloudsql
              key: username
        - name: DATABASE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cloudsql
              key: password
        # [END cloudsql_secrets]
        ports:
        - containerPort: 8080
      # [START proxy_container]
      - image: b.gcr.io/cloudsql-docker/gce-proxy:1.05
        name: cloudsql-proxy
        command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                  "-instances=<I put here my instance name inside>",
                  "-credential_file=<my_credential_file"]
        volumeMounts:
        - name: cloudsql-oauth-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        - name: ssl-certs
          mountPath: /etc/ssl/certs
        - name: cloudsql
          mountPath: /cloudsql
      # [END proxy_container]
      # [START volumes]
      volumes:
      - name: cloudsql-oauth-credentials
        secret:
          secretName: cloudsql-oauth-credentials
      - name: ssl-certs
        hostPath:
          path: /etc/ssl/certs
      - name: cloudsql
        emptyDir: {}
      # [END volumes]
# [END kubernetes_deployment]
---
# [START service]
apiVersion: v1
kind: Service
metadata:
  name: example
  labels:
    app: example
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: example
# [END service]
It works fine
I get the output below with kubectl get pods:
NAME        READY   STATUS    RESTARTS   AGE
mypodname   2/2     Running   0          21h
and with kubectl get services
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
example      LoadBalancer   10.xx.xxx.xxx   34.xx.xxx.xxx   80:30833/TCP   21h
kubernetes   ClusterIP      10.xx.xxx.x     <none>          443/TCP        23h
kubectl describe services example
gives me the following output:
Name: example
Namespace: default
Labels: app=example
Annotations: <none>
Selector: app=example
Type: LoadBalancer
IP: 10.xx.xxx.xxx
LoadBalancer Ingress: 34.xx.xxx.xxx
Port: <unset> 80/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30833/TCP
Endpoints: 10.xx.x.xx:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
But I can't connect to my REST API via curl or the browser.
I try to connect to <external_ip>:80 and get a connection refused on this port.
I scanned the external IP with Nmap and it shows that port 80 is closed, but I'm not sure if that's the reason.
My nmap output:
PORT STATE SERVICE
80/tcp closed http
554/tcp open rtsp
7070/tcp open realserver
Thank you for your help guys
Please create an ingress firewall rule to allow traffic on port 30833 to all the nodes; that should resolve the issue.
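A hedged sketch of such a rule with gcloud; the rule name is arbitrary, and the --network and --target-tags values are assumptions that need to match how the GKE nodes are set up:
gcloud compute firewall-rules create allow-nodeport-30833 \
    --allow=tcp:30833 \
    --network=default \
    --target-tags=<your-gke-node-tag>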

Failed to connect to cloud SQL using proxy despite following the documentation

I'm using the doctrine ORM with Symfony, the PHP framework. I'm getting bizarre behaviour when trying to connect to Cloud SQL from GKE.
I'm able to get a connection to the DB via doctrine on the command line; for example, php bin/console doctrine:database:create is successful and I can see a connection opened in the proxy pod logs.
But when I try to connect to the DB via doctrine in my application, I run into this error without fail:
An exception occurred in driver: SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Name or service not known
I have been trying to get my head around this but it doesn't make sense: why would I be able to connect via the command line but not in my application?
I followed the documentation here for setting up a db connection using cloud proxy. This is my Kubernetes deployment:
---
apiVersion: "extensions/v1beta1"
kind: "Deployment"
metadata:
  name: "riptides-api"
  namespace: "default"
  labels:
    app: "riptides-api"
    microservice: "riptides"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: "riptides-api"
      microservice: "riptides"
  template:
    metadata:
      labels:
        app: "riptides-api"
        microservice: "riptides"
    spec:
      containers:
      - name: "api-sha256"
        image: "eu.gcr.io/riptides/api@sha256:ce0ead9d1dd04d7bfc129998eca6efb58cb779f4f3e41dcc3681c9aac1156867"
        env:
        - name: DB_HOST
          value: 127.0.0.1:3306
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: riptides-mysql-user-skye
              key: user
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: riptides-mysql-user-skye
              key: password
        - name: DB_NAME
          value: riptides
        lifecycle:
          postStart:
            exec:
              command: ["/bin/bash", "-c", "php bin/console doctrine:migrations:migrate -n"]
        volumeMounts:
        - name: keys
          mountPath: "/app/config/jwt"
          readOnly: true
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=riptides:europe-west4:riptides-sql=tcp:3306",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        # [START cloudsql_security_context]
        securityContext:
          runAsUser: 2  # non-root user
          allowPrivilegeEscalation: false
        # [END cloudsql_security_context]
        volumeMounts:
        - name: riptides-mysql-service-account
          mountPath: /secrets/cloudsql
          readOnly: true
      volumes:
      - name: keys
        secret:
          secretName: riptides-api-keys
          items:
          - key: private.pem
            path: private.pem
          - key: public.pem
            path: public.pem
      - name: riptides-mysql-service-account
        secret:
          secretName: riptides-mysql-service-account
---
apiVersion: "autoscaling/v2beta1"
kind: "HorizontalPodAutoscaler"
metadata:
  name: "riptides-api-hpa"
  namespace: "default"
  labels:
    app: "riptides-api"
    microservice: "riptides"
spec:
  scaleTargetRef:
    kind: "Deployment"
    name: "riptides-api"
    apiVersion: "apps/v1beta1"
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: "Resource"
    resource:
      name: "cpu"
      targetAverageUtilization: 70
If anyone has any suggestions I'd be forever grateful.
It doesn't look like anything is wrong with your k8s yaml; the problem is more likely in how you are connecting using Symfony. According to the documentation here, Symfony expects the DB URI to be passed in through an environment variable called "DATABASE_URL". See the following example:
# customize this line!
DATABASE_URL="postgres://db_user:db_password#127.0.0.1:5432/db_name"
This was happening because doctrine was using default values instead of the environment variables (which should have been overriding them) that I had set up in my deployment. I changed the environment variable names to be different from the default ones and now it works.
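For reference, a hedged sketch of how a DATABASE_URL could be assembled in the deployment's env block from the variables already defined there, assuming Doctrine's MySQL driver and the proxy listening on 127.0.0.1:3306 (the riptides database name is taken from DB_NAME above; special characters in the password would still need URL-encoding):
        # Kubernetes expands $(VAR) references to env vars defined earlier in the list,
        # including ones sourced from secrets, so this entry must come after DB_USER etc.
        - name: DATABASE_URL
          value: "mysql://$(DB_USER):$(DB_PASSWORD)@127.0.0.1:3306/$(DB_NAME)"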