Using Google Cloud user credentials inside an ephemeral container?

We use Docker containers for most of our work, including development on our own machines. These are ephemeral (started each time we run a test, for example).
For AWS, the auth is easy - we have our keys in our environment, and those are passed through to the container.
We're starting to use Google Cloud services, and the auth path seems harder than AWS's. When doing local development, gcloud auth login works well. But when working in an ephemeral container, the login process would be needed each time, and I haven't found a way of persisting user credentials using either a) environment variables or b) mapped volumes - which are the two ways of passing data to containers.
From what I can read, the only path is to use service accounts. But then I think everyone needs their own service account, and has to constantly update that account's permissions to keep them aligned with their own.
Is there a better way?

The easiest way to make a local container see the gcloud credentials might be to map the file-system location of the application default credentials into the container.
First, do
gcloud auth application-default login
Then, run your container as
docker run -ti -v=$HOME/.config/gcloud:/root/.config/gcloud test
This should work. I tried it with a Dockerfile like
FROM node:4
RUN npm install --save @google-cloud/storage
ADD test.js .
CMD node ./test.js
and the test.js file like
var storage = require('@google-cloud/storage');
var gcs = storage({
  projectId: 'my-project-515',
});
var bucket = gcs.bucket('my-bucket');
bucket.getFiles(function(err, files) {
  if (err) {
    console.log("failed to get files: ", err);
  } else {
    for (var i in files) {
      console.log("file: ", files[i].name);
    }
  }
});
and it worked as expected.

I had the same issue, but I was using docker-compose. This was solved by adding the following to docker-compose.yml:
volumes:
  - $HOME/.config/gcloud:/root/.config/gcloud
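For context, a minimal docker-compose.yml using that mount might look like this (the service name app and the image name test are placeholders, not from the original answer):
version: "3"
services:
  app:
    image: test
    volumes:
      - $HOME/.config/gcloud:/root/.config/gcloud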

Related

Connect Google Cloud Build to Google Cloud SQL

Google Cloud Run allows for using Cloud SQL. But what if you need Cloud SQL when building your container in Google Cloud Build? Is that possible?
Background
I have a Next.js project that runs in a container on Google Cloud Run. Pushing my code to Cloud Build (installing dependencies, generating static pages and putting everything in a container) and deploying to Cloud Run works perfectly. 👌
Cloud SQL
But I just added some functionality for which it also needs some data from my PostgreSQL instance that runs on Google Cloud SQL. This data is used when building the project (generating the static pages).
Locally, on my machine, this works fine, as the project can connect to my Cloud SQL Proxy. While running in Cloud Run this should also work, as Cloud Run allows connecting to my Postgres instance on Cloud SQL.
My problem
When building my project with Cloud Build, I need access to my database to be able to generate my static pages. I am looking for a way to connect my Docker cloud builder to Cloud SQL, perhaps just like Cloud Run (fully managed) provides a mechanism that connects using the Cloud SQL Proxy.
That way I could be connecting to /cloudsql/INSTANCE_CONNECTION_NAME while building my project!
Question
So my question is: How do I connect to my PostgreSQL instance on Google Cloud SQL via the Cloud SQL Proxy while building my project on Google Cloud Build?
Things like my database credentials, etc. already live in Secrets Manager, so I should be able to use those details I guess 🤔
You can use whatever container you need to generate your static pages, and download the Cloud SQL Proxy in that step to open a tunnel to the database:
- name: '<YOUR CONTAINER>'
  entrypoint: 'sh'
  args:
    - -c
    - |
      wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
      chmod +x cloud_sql_proxy
      ./cloud_sql_proxy -instances=<my-project-id:us-central1:myPostgresInstance>=tcp:5432 &
      <YOUR SCRIPT>
App Engine has an exec wrapper image which has the benefit of proxying your Cloud SQL connection for you, so I use that to connect to the DB in Cloud Build (as do some Google tutorials).
However, be warned of trouble ahead: Cloud Build runs exclusively* in us-central1, which means connections to resources in any other region will be pathologically slow. For one or two operations I don't care, but if you're running a whole suite of integration tests, that simply will not work.
Also, you'll need to grant the Cloud Build service account permission to access Cloud SQL.
steps:
  - id: 'Connect to DB using appengine wrapper to help'
    name: gcr.io/google-appengine/exec-wrapper
    args:
      [
        '-i',  # The image you want to connect to the db from
        '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME:$SHORT_SHA',
        '-s',  # The postgres instance
        '${PROJECT_ID}:${_POSTGRES_REGION}:${_POSTGRES_INSTANCE_NAME}',
        '-e',  # Get your secrets here...
        'GCLOUD_ENV_SECRET_NAME=${_GCLOUD_ENV_SECRET_NAME}',
        '--',  # And then the command you want to run, in my case a database migration
        'python',
        'manage.py',
        'migrate',
      ]
substitutions:
  _GCLOUD_ENV_SECRET_NAME: mysecret
  _GCR_HOSTNAME: eu.gcr.io
  _POSTGRES_INSTANCE_NAME: my-instance
  _POSTGRES_REGION: europe-west1
* unless you're willing to pay more and risk getting stung by Beta software, in which case you can use Cloud Build workers (which are in Beta at the time of writing, anyway... I'll come back and update if they make it into production and fix the issues)
The ENV VARS (including DB connections) are not available during build steps.
However, you can use ENTRYPOINT (of Docker) to run commands when the container runs (after completing the build steps).
I needed to run DB migrations when a new build was deployed (i.e. when the container starts running), and by pointing ENTRYPOINT at a file/command I was able to run those migrations (which require DB connection details that are not available during the build process).
The "How to" part is pretty brief and is located here: https://stackoverflow.com/a/69088911/867451

Next JS serverless deployment on AWS ECS/Fargate: environment variable issue

So my goal is to deploy a serverless, Dockerized Next.js application on ECS/Fargate.
When I build my project with docker build . -f development.Dockerfile --no-cache -t myapp:latest, everything runs successfully, except that the build doesn't pick up the env file in my project's root directory. Once the build finishes, I push the Docker image to Elastic Container Registry (ECR), and my Elastic Container Service (ECS) references that image.
So naturally, my built image doesn't have an env file (which contains the API keys and DB credentials), and as a result my app is deployed but all of the services relying on those credentials fail, because there isn't an env file in my container and all of the variables become undefined or null.
To fix this issue I looked at this AWS doc and implemented a solution that stores my .env file in AWS S3, with that S3 ARN referenced by the container service. However, that didn't work out, and I think it's because of the way I'm setting my next.config.js to reference my environment files in my local codebase. I also tried to set my environment variables manually (very insecure, screenshot below) when configuring the container in my task definition, and that didn't work either.
My next.config.js:
const dotEnvConfig = { path: `../../${process.env.NODE_ENV}.env` };
require("dotenv").config(dotEnvConfig);
module.exports = {
  serverRuntimeConfig: {
    // Will only be available on the server side
    xyzKey: process.env.xyzSecretKey || "",
  },
  publicRuntimeConfig: {
    // Will be available on both server and client
    appUrl: process.env.app_url || "",
  },
};
So in my local codebase's root directory I have two files, development.env (local API keys) and production.env (live API keys), and my next.config.js is located at /packages/app/next.config.js.
So apparently it was just plain Next.js's way of handling env variables.
In next.config.js
module.exports = {
  env: {
    user: process.env.SQL_USER || "",
    // add all the env vars here
  },
};
To use the environment variable user in the app, all you have to do is call process.env.user, and user will reference SQL_USER from my local .env file, where it is stored as SQL_USER="abc_user".
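For example (a hypothetical check, not from the original answer), anywhere in the app code:
// process.env.user is inlined by Next.js at build time from next.config.js
console.log(process.env.user); // prints the SQL_USER value, e.g. "abc_user"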
You should be setting the environment variables in the ECS task definition. To avoid storing sensitive values in the task definition, use AWS Systems Manager Parameter Store or AWS Secrets Manager, as documented here.
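As a rough sketch (the parameter name /myapp/SQL_USER, the region and the account ID are examples of mine, not from the question), the container definition in the task definition can pull a value from Parameter Store like this, provided the task execution role is allowed to read it:
"containerDefinitions": [
  {
    "name": "myapp",
    "secrets": [
      { "name": "SQL_USER", "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/myapp/SQL_USER" }
    ]
  }
]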

Pushing docker image through jenkins

I'm pushing a Docker image through a Jenkins pipeline, but I'm getting the following error:
ERROR: Could not find credentials matching
gcr:["google-container-registry"]
I tried with:
gcr:["google-container-registry"]
gcr:[google-container-registry]
gcr:google-container-registry
google-container-registry
but none of them worked.
In the global credentials I have:
NAME: google-container-registry
KIND: Google Service Account from private key
DESCRIPTION: A Google robot account for accessing Google APIs and
services.
The proper syntax is the following (provided your gcr credentials id is 'google-container-registry'):
docker.withRegistry("https://gcr.io", "gcr:google-container-registry") {
sh "docker push [your_image]"
}
Check if you have the https://plugins.jenkins.io/google-container-registry-auth/ plugin installed.
After the plugin is installed, use the gcr:credential-id syntax.
Example:
stage("docker build"){
Img = docker.build(
"gcpProjectId/imageName:imageTag",
"-f Dockerfile ."
)
}
stage("docker push") {
docker.withRegistry('https://gcr.io', "gcr:credential-id") {
Img.push("imageTag")
}
}
Go to Jenkins → Manage Jenkins → Manage Plugins and install plugins:
Google Container Registry
Google OAuth Credentials
CloudBees Docker Build and Publish
Then go to Jenkins → Credentials → Global Credentials → Add Credentials, choose the desired 'Project Name' and upload the JSON key file.
Jenkinsfile:
stage('Deploy Image') {
    steps {
        script {
            docker.withRegistry('https://gcr.io', "gcr:${ID}") {
                dockerImage.push("$BUILD_NUMBER")
                dockerImage.push('latest')
            }
        }
    }
}

passing aws creds to kitchen ec2 command line

I am trying to do Chef cookbook development via a Jenkinsfile pipeline. I have my Jenkins server running as a container (using the jenkinsci/blueocean image). In one of the stages, I am trying to do aws configure and then run kitchen test. With the code below I am getting an unauthorized operation error; my AWS creds are not being passed properly to .kitchen.yml (no need to check the IAM creds, because they have admin access).
stage('\u27A1 Verify Kitchen') {
    steps {
        sh '''mkdir -p ~/.aws/
        echo 'AWS_ACCESS_KEY_ID=...' >> ~/.aws/credentials
        echo 'AWS_SECRET_ACCESS_KEY=...' >> ~/.aws/credentials
        cat ~/.aws/credentials
        KITCHEN_LOCAL_YAML=.kitchen.yml /opt/chefdk/embedded/bin/kitchen list
        KITCHEN_LOCAL_YAML=.kitchen.yml /opt/chefdk/embedded/bin/kitchen test'''
    }
}
Is there any way I can pass AWS creds here? Also, .kitchen.yml no longer supports passing AWS creds inside the file. Is there some way I can pass creds on the command line, i.e. .kitchen.yml access_key=... secret_access_key=... /opt/chefdk/embedded/bin/kitchen test?
Really appreciate your help.
You don't need to set KITCHEN_LOCAL_YAML=.kitchen.yml, that's already the primary config file.
You probably want to be using a Jenkins credential file, not hardcoding things into the job. But that said, the reason this isn't working is that the AWS credentials file is not a shell script, which is the syntax you're using there. It's an INI-style file, paired with a config file that shares a similar structure.
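For reference, the credentials file uses INI-style sections and keys, not shell exports:
[default]
aws_access_key_id = ...
aws_secret_access_key = ...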
You should probably just use the environment variable support in kitchen-ec2, via the withEnv pipeline helper method or similar mechanisms for integrating with Jenkins-managed credentials; see the sketch below.
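A rough sketch of that approach with the Credentials Binding plugin (the credential IDs aws-access-key-id and aws-secret-access-key are assumed names, not something from the question):
stage('\u27A1 Verify Kitchen') {
    steps {
        // Secret-text credentials are exposed to the shell step as the standard AWS env vars,
        // which kitchen-ec2 picks up through the normal AWS credential chain.
        withCredentials([string(credentialsId: 'aws-access-key-id', variable: 'AWS_ACCESS_KEY_ID'),
                         string(credentialsId: 'aws-secret-access-key', variable: 'AWS_SECRET_ACCESS_KEY')]) {
            sh '/opt/chefdk/embedded/bin/kitchen test'
        }
    }
}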

Getting files from server on AWS Using Jenkins Build

I have installed Jenkins on my local machine (on premises). I have my server (Linux) in the AWS cloud. I need to share logs with developers without giving them server access, so I need to create a Jenkins job that, when run, fetches the logs from the server.
How can I do that? If anyone follows the same process to get data from the cloud, please help me solve this... Thanks in advance.
Use the SSH Agent plugin to securely set up your private key
Use SCP to copy the log files to the local workspace
Archive those files to the Jenkins job
You could write a pipeline script to do this. Something like:
node ("linux") {
sshagent (credentials: ['deploy-dev']) {
sh 'scp user#awshostnamehere:/somepath/somelogfile .'
archive somelogfile
}
}
Note that this requires you to fill in the blanks. To get this to work you would have to:
Set up an SSH private key credential named deploy-dev
Set up a build agent with the label 'linux', or change that to a label of an agent you do have.