What's the proper way to proxy a Cloud SQL database into Bitbucket Pipelines?
I have a Google Cloud SQL Postgres instance (and I also tried a MySQL DB).
Opening all ports to connections lets Bitbucket Pipelines deploy my Django-based Google App Engine project correctly, following this example pipeline - https://github.com/GoogleCloudPlatform/continuous-deployment-bitbucket/blob/master/bitbucket-pipelines.yml
However, when I limit access to the Cloud SQL instances and use cloud_sql_proxy instead, I can deploy fine locally, but Bitbucket always fails to find the SQL server.
My bitbucket-pipelines.yml looks something like this:
- export CLOUDSDK_CORE_DISABLE_PROMPTS=1
# Google Cloud SDK is pinned for build reliability. Bump if the SDK complains about deprecation.
- SDK_VERSION=127.0.0
- SDK_FILENAME=google-cloud-sdk-${SDK_VERSION}-linux-x86_64.tar.gz
- curl -O -J https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/${SDK_FILENAME}
- tar -zxvf ${SDK_FILENAME} --directory ${HOME}
- export PATH=${PATH}:${HOME}/google-cloud-sdk/bin
# Install Google App Engine SDK
- GAE_PYTHONPATH=${HOME}/google_appengine
- export PYTHONPATH=${PYTHONPATH}:${GAE_PYTHONPATH}
- python scripts/fetch_gae_sdk.py $(dirname "${GAE_PYTHONPATH}")
- echo "${PYTHONPATH}" && ls ${GAE_PYTHONPATH}
# Install app & dev dependencies, test, deploy, test deployment
- echo "key = '${GOOGLE_API_KEY}'" > api_key.py
- echo ${GOOGLE_CLIENT_SECRET} > client-secret.json
- gcloud auth activate-service-account --key-file client-secret.json
- wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
- chmod +x cloud_sql_proxy
- ./cloud_sql_proxy -instances=google-cloud-project-name:us-west1:google-cloud-sql-database-name=tcp:5432 &
- gcloud app deploy --no-promote --project google-cloud-project-name --quiet
At this point, I would expect to be able to access the SQL database, but it doesn't seem to be available, and my deployment fails to find a locally proxied database.
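One thing worth ruling out is a startup race: the proxy is backgrounded and the deploy runs immediately afterwards. A hedged wait loop between those two steps (assuming nc is available in the Bitbucket build image) would look like:

- ./cloud_sql_proxy -instances=google-cloud-project-name:us-west1:google-cloud-sql-database-name=tcp:5432 &
# Wait up to 30s for the proxy to accept connections before deploying
- for i in $(seq 1 30); do nc -z 127.0.0.1 5432 && break; sleep 1; done
- gcloud app deploy --no-promote --project google-cloud-project-name --quiet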
I am very confused regarding how to set and access API secrets in a Next.js app within an AWS Amplify project.
The scenario is: I have a private API key that fetches data from an API. Obviously, this is a secret key and I don't want to share it in my GitHub repo or expose it in the browser. I created a .env.local file and placed my secret there.
API_KEY="qwerty123"
I am able to access this key in my code by using process.env.API_KEY
Here is an example fetch request with that API Key: https://developer.nps.gov/api/v1/parks?${parkCode}&api_key=${process.env.API_KEY}
This works perfectly when I run yarn dev and yarn build -> yarn start
This is the message I get when I run yarn start:
next start
ready - started server on 0.0.0.0:3000, url: http://localhost:3000
info - Loaded env from /Users/tmo/Desktop/Code/projects/visit-national-parks/.env.local
The env file is loaded and its values are accessible on my local machine.
However,
When I push this code to GitHub and start the build process in AWS Amplify, the app builds, but the API fetch calls do not work. I get a `500 Server Error`.
This is what I have done to try and solve this issue:
1. Added my API_KEY in the Environment variables tab in Amplify
2. Updated my Build settings:
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        - API_KEY=${API_KEY} # Added my API_KEY from the environment variables tab in Amplify
        - yarn run build
I am not sure what else to do. After building the app again, I still get the 500 server error.
Here is the live Amplify app with the server error.
We're working on something similar right now. Our dev designed it so it reads an .env file.
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        - echo API_KEY=$API_KEY > .env
        - echo OTHERKEY=$OTHER_KEY >> .env
        - yarn run build
We were able to pick it up and pass it to AWS' DynamoDB Client SDK.
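One thing to check if your fetch runs in the browser: Next.js only inlines environment variables prefixed with NEXT_PUBLIC_ into client-side bundles. A client-visible variant of the snippet above (the prefix is standard Next.js behavior; the rest just mirrors our build settings) would be:

build:
  commands:
    # NEXT_PUBLIC_-prefixed variables are baked into the client bundle at build time
    - echo NEXT_PUBLIC_API_KEY=$API_KEY > .env
    - yarn run build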
Not sure if it's your call or not, but yarn can be fickle in our Amplify projects sometimes, so we usually resort to using npm if it starts acting up.
Google Cloud Run allows for using Cloud SQL. But what if you need Cloud SQL when building your container in Google Cloud Build? Is that possible?
Background
I have a Next.js project that runs in a container on Google Cloud Run. Pushing my code to Cloud Build (installing dependencies, generating static pages, and putting everything in a container) and deploying to Cloud Run works perfectly. 👌
Cloud SQL
But I just added some functionality that also needs some data from my PostgreSQL instance running on Google Cloud SQL. This data is used when building the project (generating the static pages).
Locally, on my machine, this works fine because the project can connect through my Cloud SQL proxy. While running on Cloud Run this should also work, as Cloud Run allows connecting to my Postgres instance on Cloud SQL.
My problem
When building my project with Cloud Build, I need access to my database to be able to generate my static pages. I am looking for a way to connect my Docker cloud builder to Cloud SQL, similar to how Cloud Run (fully managed) provides a mechanism that connects using the Cloud SQL Proxy.
That way I could be connecting to /cloudsql/INSTANCE_CONNECTION_NAME while building my project!
Question
So my question is: How do I connect to my PostgreSQL instance on Google Cloud SQL via the Cloud SQL Proxy while building my project on Google Cloud Build?
Things like my database credentials, etc. already live in Secrets Manager, so I should be able to use those details I guess 🤔
You can use whatever container you need to generate your static pages, and download the Cloud SQL Proxy inside it to open a tunnel to the database:
- name: '<YOUR CONTAINER>'
  entrypoint: 'sh'
  args:
    - -c
    - |
      wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
      chmod +x cloud_sql_proxy
      ./cloud_sql_proxy -instances=<my-project-id:us-central1:myPostgresInstance>=tcp:5432 &
      <YOUR SCRIPT>
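With the tunnel open, <YOUR SCRIPT> can reach the database on localhost; for example (psql, the user, and the database name are illustrative):

# The proxy forwards 127.0.0.1:5432 to the Cloud SQL instance
psql "host=127.0.0.1 port=5432 user=postgres dbname=mydb"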
App Engine has an exec wrapper which has the benefit of proxying your Cloud SQL connection for you, so I use that to connect to the DB in Cloud Build (so do some Google tutorials).
However, be warned of trouble ahead: Cloud Build runs exclusively* in us-central1, which means it'll be pathologically slow to connect to from anywhere else. For one or two operations I don't care, but if you're running a whole suite of integration tests, that simply will not work.
Also, you'll need to grant permission for GCB to access GCSQL.
steps:
  - id: 'Connect to DB using appengine wrapper to help'
    name: gcr.io/google-appengine/exec-wrapper
    args:
      [
        '-i', # The image you want to connect to the db from
        '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME:$SHORT_SHA',
        '-s', # The postgres instance
        '${PROJECT_ID}:${_POSTGRES_REGION}:${_POSTGRES_INSTANCE_NAME}',
        '-e', # Get your secrets here...
        'GCLOUD_ENV_SECRET_NAME=${_GCLOUD_ENV_SECRET_NAME}',
        '--', # And then the command you want to run, in my case a database migration
        'python',
        'manage.py',
        'migrate',
      ]
substitutions:
  _GCLOUD_ENV_SECRET_NAME: mysecret
  _GCR_HOSTNAME: eu.gcr.io
  _POSTGRES_INSTANCE_NAME: my-instance
  _POSTGRES_REGION: europe-west1
* Unless you're willing to pay more and get very stung by beta software, in which case you can use Cloud Build workers (which are in beta at the time of writing, anyway... I'll come back and update if they make it into production and fix the issues).
The ENV VARS (including DB connections) are not available during build steps.
However, you can use Docker's ENTRYPOINT to run commands when the container starts (after the build steps complete).
I needed to run DB migrations when a new build was deployed (i.e., when the container starts running), and pointing ENTRYPOINT at a file/command let me run the migrations (which require DB connection details that are not available during the build process).
The "how to" part is pretty brief and is located here: https://stackoverflow.com/a/69088911/867451
I would like to run database migrations written in Node.js during the Cloud Build process.
Currently, the database migration command is executed, but it seems that the Cloud Build process does not have access to connect to Cloud SQL via an IP address with username/password.
In the case with Cloud SQL and Node.js it would look something like this:
steps:
  # Install Node.js dependencies
  - id: yarn-install
    name: gcr.io/cloud-builders/yarn
    waitFor: ["-"]

  # Install Cloud SQL proxy
  - id: proxy-install
    name: gcr.io/cloud-builders/yarn
    entrypoint: sh
    args:
      - "-c"
      - "wget https://storage.googleapis.com/cloudsql-proxy/v1.20.1/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy && chmod +x cloud_sql_proxy"
    waitFor: ["-"]

  # Migrate database schema to the latest version
  # https://knexjs.org/#Migrations-CLI
  - id: migrate
    name: gcr.io/cloud-builders/yarn
    entrypoint: sh
    args:
      - "-c"
      - "(./cloud_sql_proxy -dir=/cloudsql -instances=<CLOUD_SQL_CONNECTION> & sleep 2) && yarn run knex migrate:latest"
    timeout: "1200s"
    waitFor: ["yarn-install", "proxy-install"]

timeout: "1200s"
You launch yarn install and download the Cloud SQL Proxy in parallel. Once these two steps are complete, you launch the proxy, wait 2 seconds, and finally run yarn run knex migrate:latest.
For this to work you need the Cloud SQL Admin API enabled in your GCP project.
Where <CLOUD_SQL_INSTANCE> is your Cloud SQL instance connection name that can be found here. The same name will be used in your SQL connection settings, e.g. host=/cloudsql/example:us-central1:pg13.
Also, make sure that the Cloud Build service account has the "Cloud SQL Client" role in the GCP project where the db instance is located.
As of tag 1.16 of gcr.io/cloudsql-docker/gce-proxy, the currently accepted answer no longer works. Here is a different approach that keeps the proxy in the same step as the commands that need it:
- id: cmd-with-proxy
  name: [YOUR-CONTAINER-HERE]
  timeout: 100s
  entrypoint: sh
  args:
    - -c
    - '(/workspace/cloud_sql_proxy -dir=/workspace -instances=[INSTANCE_CONNECTION_NAME] & sleep 2) && [YOUR-COMMAND-HERE]'
The proxy will automatically exit once the main process exits. Additionally, it'll mark the step as "ERROR" if either the proxy or the command given fails.
This does require that the binary be in the /workspace volume, but it can be provided either manually or via a prerequisite step like this:
- id: proxy-install
  name: alpine:3.10
  entrypoint: sh
  args:
    - -c
    - 'wget -O /workspace/cloud_sql_proxy https://storage.googleapis.com/cloudsql-proxy/v1.16/cloud_sql_proxy.linux.386 && chmod +x /workspace/cloud_sql_proxy'
Additionally, this should work with TCP since the proxy will be in the same container as the command.
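If your command expects TCP rather than a Unix socket, the same pattern should work with a port mapping (a sketch under the same assumptions; the client would then connect to 127.0.0.1:5432):

- id: cmd-with-proxy-tcp
  name: [YOUR-CONTAINER-HERE]
  entrypoint: sh
  args:
    - -c
    - '(/workspace/cloud_sql_proxy -instances=[INSTANCE_CONNECTION_NAME]=tcp:5432 & sleep 2) && [YOUR-COMMAND-HERE]'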
Use google-appengine/exec-wrapper. It is an image that does exactly this. Usage (see the README at the link):
steps:
  - name: "gcr.io/google-appengine/exec-wrapper"
    args: ["-i", "gcr.io/my-project/appengine/some-long-name",
           "-e", "ENV_VARIABLE_1=value1", "-e", "ENV_2=value2",
           "-s", "my-project:us-central1:my_cloudsql_instance",
           "--", "bundle", "exec", "rake", "db:migrate"]
The -s sets the proxy target.
Cloud Build runs using a service account, and it looks like you need to grant that account access to Cloud SQL.
You can find additional info about setting service account permissions here.
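A typical grant looks like this (a sketch; my-project and the service account's project number are placeholders, and roles/cloudsql.client is the "Cloud SQL Client" role mentioned earlier):

gcloud projects add-iam-policy-binding my-project \
  --member=serviceAccount:123456789@cloudbuild.gserviceaccount.com \
  --role=roles/cloudsql.client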
Here's how to combine Cloud Build + Cloud SQL Proxy + Docker.
If you're running your database migrations/operations within a Docker container in Cloud Build, it won't be able to directly access your proxy, because Docker containers are isolated from the host machine.
Here's what I managed to get up and running:
- id: build
  # Build your application
  waitFor: ['-']

- id: install-proxy
  name: gcr.io/cloud-builders/wget
  entrypoint: bash
  args:
    - -c
    - wget -O /workspace/cloud_sql_proxy https://storage.googleapis.com/cloudsql-proxy/v1.15/cloud_sql_proxy.linux.386 && chmod +x /workspace/cloud_sql_proxy
  waitFor: ['-']

- id: migrate
  name: gcr.io/cloud-builders/docker
  entrypoint: bash
  args:
    - -c
    - |
      /workspace/cloud_sql_proxy -dir=/workspace -instances=projectid:region:instanceid & sleep 2 && \
      docker run -v /workspace:/root \
        --env DATABASE_HOST=/root/projectid:region:instanceid \
        $_IMAGE_URL:$COMMIT_SHA
      # Pass other necessary env variables (db username/password, etc.) with more --env flags.
  timeout: '1200s'
  waitFor: [build, install-proxy]
Because our db operations take place within the Docker container, I found the best way to provide access to Cloud SQL was specifying the Unix socket directory (-dir=/workspace) instead of exposing TCP port 5432.
Note: I recommend using the directory /workspace instead of /cloudsql for Cloud Build.
Then we mounted the /workspace directory to the Docker container's /root directory, which is the default directory where your application code resides. When I tried to mount it anywhere other than /root, nothing seemed to happen (perhaps a permission issue with no error output).
Also: I noticed the proxy version 1.15 works well. I had issues with newer versions. Your mileage may vary.
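As a quick sanity check (hypothetical values matching the snippet above), you can confirm the socket is visible inside the container before running the real workload:

# The instance directory should contain the .s.PGSQL.5432 socket once the proxy is up
docker run -v /workspace:/root --entrypoint ls $_IMAGE_URL:$COMMIT_SHA /root/projectid:region:instanceid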
What I am trying to do is enable continuous delivery from GitLab to my Compute Engine instance on Google Cloud. I have Ubuntu 16.04 LTS running over there and have installed all the components needed to run my project: Swift, Vapor, nginx.
I have managed to install the GitLab runner as well and created a runner which is accessible from my GitLab repo. Every time I push to master the runner triggers. What happens is a failure due to:
could not create leading directories of '/home/gitlab-runner/builds/2bbbbbd/0/Server/Packages/vapor.git': Permission denied
If I change the permissions with chmod -R 777, it will hang on "running" for the build stage visible in the GitLab pipeline.
I did something like:
sudo chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/builds
sudo chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/cache
but this hasn't helped; the error is the same: Permission denied.
Below is my .gitlab-ci.yml:
before_script:
  - swift --version

stages:
  - build
  - deploy

job_build:
  stage: build
  before_script:
    - vapor clean
  script:
    - vapor build --release
  only:
    - master

job_run_app:
  stage: deploy
  script:
    - echo "Deploy a API"
    - vapor run --name=App --env=production
  environment:
    name: production

job_run_frontend:
  stage: deploy
  script:
    - echo "Deploy a Frontend"
    - vapor run --name=Frontend --env=production
  environment:
    name: production
But it never passes to the next stage, e.g. deploy. I waited more than 14 hours for that, but without result.
And... I have a few more questions:
The GitLab runner creates builds under /home/gitlab-runner/builds/, where every new job gets its own folder, e.g. /home/gitlab-runner/builds/2bbbbbd/, in which my project lives and the commands are executed. So what happens when the first job is still running and I deploy a new version? Are the ports blocked by the first instance, and so on?
If I want to enable supervisor, how do I do that when the folder is different every time I deploy?
Can anyone explain, show me, or point me to a tutorial on how to do continuous deployment without Docker?
How to start a service using GitLab runner
Thanks to a long, deep search I finally found an answer! The full article can be found above.
Briefly: the GitLab CI documentation recommends using dpl for deployment. The GitLab runner runs the tests and then the process should end; the runner is designed to kill all processes it created after finishing each build, and it cannot keep anything running outside its build directory.
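Consistent with that, a common pattern is to hand the long-running app to a service manager so it survives the runner's cleanup. A sketch (the vapor-app.service unit and a sudo rule for the gitlab-runner user are assumptions):

job_run_app:
  stage: deploy
  script:
    # Let systemd own the long-running process instead of the runner,
    # so it is not killed when the job's process tree is cleaned up.
    - sudo systemctl restart vapor-app
  environment:
    name: production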
I have a Django project that I deploy on a server using CircleCI. The server is a basic cloud server, and I can SSH into it.
I set up the deployment section of my circle.yml file, and everything is working fine. I would like to automatically perform some actions on the server after the deployment (such as migrating the database or reloading gunicorn).
Is there a way to do that with CircleCI? I looked in the docs but couldn't find anything related to this particular problem. I also tried to put ssh user@my_server_ip after my deployment step, but then I get stuck and cannot perform any action. I can successfully SSH in, but the rest of the commands are not executed.
Here is what my ideal circle.yml file would look like:
deployment:
  staging:
    branch: develop
    commands:
      - rsync --update ./requirements.txt user@server:/home/user/requirements.txt
      - rsync -r --update ./myapp/ user@server:/home/user/myapp/
      - ssh user@server
      - workon myapp_venv
      - cd /home/user/
      - pip install -r requirements.txt
I solved the problem by putting a post_deploy.sh file on the server and adding this line to the circle.yml:
ssh -i ~/.ssh/id_myhost user@server 'post_deploy.sh'
It executes the instructions in the post_deploy.sh file, which is exactly what I wanted.
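For illustration, given the goals above (migrating the database, reloading gunicorn), post_deploy.sh might look along these lines; the virtualenv path and the gunicorn service name are assumptions:

#!/bin/bash
# post_deploy.sh - runs on the server after the rsync steps
set -e
source /home/user/.virtualenvs/myapp_venv/bin/activate  # assumed virtualenvwrapper path
cd /home/user
pip install -r requirements.txt
python myapp/manage.py migrate --noinput
sudo systemctl restart gunicorn  # assumes gunicorn runs as a systemd service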