GCSFuse not finding default credentials when running a cloud run app docker locally - google-cloud-platform

I am working on mounting a Cloud Storage Bucket to my Cloud Run App, using the example and code from the official tutorial https://cloud.google.com/run/docs/tutorials/network-filesystems-fuse
The application uses docker only (no cloudbuild.yaml)
The Dockerfile builds without issue using the command:
docker build --platform linux/amd64 -t fusemount .
I then start the container with the following command:
docker run --rm -p 8080:8080 -e PORT=8080 fusemount
and when it runs, gcsfuse is invoked with both the mount directory and the bucket URL:
gcsfuse --debug_gcs --debug_fuse gs://<my-bucket> /mnt/gs
But the connection fails:
2022/12/11 13:54:35.325717 Start gcsfuse/0.41.9 (Go version go1.18.4) for app "" using mount point: /mnt/gcs
2022/12/11 13:54:35.618704 Opening GCS connection...
2022/12/11 13:57:26.708666 Failed to open connection: GetTokenSource: DefaultTokenSource: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
I have already set up application-default credentials with the following command:
gcloud auth application-default login
I also have a Python-based Cloud Functions project, tested on the same local machine, that has no problem accessing the same storage bucket with the same default credentials.
What am I missing?

Google client libraries look in ~/.config/gcloud when using the Application Default Credentials (ADC) authorization approach.
Your local Docker container doesn't contain this config when running locally.
So, you might want to mount it when running a container:
$ docker run --rm -v /home/$USER/.config/gcloud:/root/.config/gcloud -p 8080:8080 -e PORT=8080 fusemount
Some notes:
I'm not sure which OS you are using, so replace /home/$USER with the real path to your home directory if needed
Likewise, I'm not sure your image uses /root as its home directory, so make sure the config from the previous note is mounted to the right location
Make sure your local user is authorized with the gcloud CLI, as you mentioned, using gcloud auth application-default login
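Alternatively, a minimal sketch (assuming gcloud auth application-default login has already been run on the host; the in-container path /tmp/adc.json is just illustrative) is to mount only the ADC file and point GOOGLE_APPLICATION_CREDENTIALS at it:
docker run --rm \
  -v "$HOME/.config/gcloud/application_default_credentials.json":/tmp/adc.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/adc.json \
  -p 8080:8080 -e PORT=8080 fusemount
gcsfuse and the Google client libraries both honor GOOGLE_APPLICATION_CREDENTIALS, so this avoids guessing the home directory inside the image.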
Let me know if this helped.

If you are using Docker and not Google Compute Engine (GCE), did you try mounting a service account key when running the container and using that key when mounting gcsfuse?
If you are building and deploying to Cloud Run, did you grant the required permissions mentioned in https://cloud.google.com/run/docs/tutorials/network-filesystems-fuse#ship-code?
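For the first point, a hedged sketch (sa-key.json and the in-container path are illustrative; gcsfuse accepts a JSON key via its --key-file flag):
docker run --rm \
  -v "$PWD/sa-key.json":/tmp/sa-key.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/sa-key.json \
  -p 8080:8080 -e PORT=8080 fusemount
# inside the container, mount with the same key:
gcsfuse --key-file /tmp/sa-key.json <my-bucket> /mnt/gcs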

Related

GCP Cloud Code plugin in IntelliJ "Failed to enable GCP auth addon"

While trying to locally run the "Java Cloud Run hello world sample" with Cloud Run: Run Locally, I keep getting:
Enabling GCP auth addon...
Failed to enable GCP auth addon. Deployment will continue but GCP
credentials will not be added to minikube. Please ensure you have up
to date application default credentials (ADC) by running gcloud auth login --update-adc
Things that I have tried that didn't solve the problem:
run gcloud auth login --update-adc
use a different service account
not provide a service account
provide the environment variable GOOGLE_APPLICATION_CREDENTIALS
Current configuration
What am I missing?
It appears to be a bug with minikube
The workaround:
Install minikube https://minikube.sigs.k8s.io/docs/start/
In your terminal, run minikube delete --all
Delete the existing minikube images from Docker
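A hedged sketch of those steps in a terminal (the image names on your machine may differ):
minikube delete --all                # remove all local minikube clusters
docker images | grep -i minikube     # find leftover minikube images (e.g. gcr.io/k8s-minikube/kicbase)
docker rmi <image-id>                # remove them by ID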
The original workaround and issue details can be found below:
https://github.com/GoogleCloudPlatform/cloud-code-intellij/issues/2949#issuecomment-870120602

File upload from Docker container to Google Cloud Storage

I'm putting together a CI/CD pipeline on GKE based on
this guide by Google.
Everything is working well, except that as it's a Django project I also need to run the collectstatic command and upload the files to Google Storage.
In the Dockerfile I have the following commands:
RUN python manage.py collectstatic --noinput
RUN gsutil rsync -R /home/vmagent/app/myapp/static gs://mystorage/static
Collectstatic works as expected, but the gsutil upload fails with the following error message:
ServiceException: 401 Anonymous caller does not have storage.objects.create access to mystorage/static/...
What's the best way to authenticate gsutil?
If you are running your Docker container on GCP (GKE), you can authenticate a pod with a service account's credentials, and the pod then has the same permissions as that service account. More information about this can be found here. Both GKE and other Kubernetes clusters allow you to import key files as secrets. On all Kubernetes platforms this is done with the following command. The full guide can be found here.
kubectl create secret generic pubsub-key --from-file=key.json=PATH-TO-KEY-FILE.json
Then set the environment variable in your manifest like this:
env:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /var/secrets/google/key.json
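If gsutil is installed as part of the Cloud SDK in the image, a minimal sketch (assuming the key is mounted at the path above) is to activate the service account at container start and only then run the sync, rather than at image build time (where, as the error shows, no credentials were available):
# activate the mounted key so gsutil no longer runs as an anonymous caller
gcloud auth activate-service-account --key-file="$GOOGLE_APPLICATION_CREDENTIALS"
gsutil rsync -R /home/vmagent/app/myapp/static gs://mystorage/static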
Too bad running pods under a certain role is not yet possible. GCP operates with service accounts, not with roles like AWS. In AWS you can assign a role to a task, which grants a container permissions under that role.

Google Cloud Run deployment not working using `gcloud` SDK CLI

I have a service created on Google Cloud run that I am able to deploy manually through the Google Cloud Console UI using an image on Container registry. But deployment from CLI is failing. Here is the command I am using and the error I get. I am not able to understand what I am missing:
$ gcloud beta run deploy service-name --platform managed --region region-name --image image-url
Deploying container to Cloud Run service [service-name] in project [project-name] region [region-name]
X Deploying...
. Creating Revision...
. Routing traffic...
Deployment failed
ERROR: (gcloud.beta.run.deploy) INVALID_ARGUMENT: The request has errors
- '#type': type.googleapis.com/google.rpc.BadRequest
fieldViolations:
- description: spec.revisionTemplate.spec.container.ports should be empty
field: spec.revisionTemplate.spec.container.ports
Update 1:
I have updated the SDK using gcloud components update, but I still have the same issue
Here's my SDK Version
$gcloud version
Google Cloud SDK 270.0.0
beta 2019.05.17
bq 2.0.49
core 2019.11.04
gsutil 4.46
I am using a multistage docker build. Here's my Dockerfile:
FROM custom-dev-image
COPY . /project_dir
WORKDIR /project_dir
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
/usr/local/bin/go build -a \
-ldflags '-w -extldflags "-static"' \
-o /root/go/bin/executable ./cmds/project/main.go
FROM alpine:3.10
ENV GIN_MODE=release APP_NAME=project_name
COPY --from=0 /root/go/bin/executable /usr/local/bin/
CMD executable
I had this same problem, and I assume it was because I had an older Cloud Run deployment that had been created before some update, i.e. before I ran gcloud components update.
I was able to fix it by deleting the whole Cloud Run service (through the GUI) and deploying it from scratch again (via terminal). I noticed that the ports: definition disappeared from the YAML once I did this.
After this I could do deployments normally.
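For reference, a sketch of that workaround using only the CLI (service, region and image values are the placeholders from the question):
gcloud run services delete service-name --platform managed --region region-name
gcloud beta run deploy service-name --platform managed --region region-name --image image-url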
This was a bug in Cloud Run. It has been fixed and deploying with CLI is working for me now. Here's the link to the issue I had raised with Google Cloud which has a response from them https://issuetracker.google.com/issues/144069696.

Connect to Memorystore from Cloud Run

I want to run a service on Google Cloud Run that uses Cloud Memorystore as cache.
I created a Memorystore instance in the same region as Cloud Run and used the example code from https://github.com/GoogleCloudPlatform/golang-samples/blob/master/memorystore/redis/main.go to connect; this didn't work.
Next I created a Serverless VPC Access connector, which didn't help. I use Cloud Run without a GKE cluster, so I can't change any configuration.
Is there a way to connect from Cloud Run to Memorystore?
To connect Cloud Run (fully managed) to Memorystore you need to use the mechanism called "Serverless VPC Access" or a "VPC Connector".
As of May 2020, Cloud Run (fully managed) has Beta support for the Serverless VPC Access. See Connecting to a VPC Network for more information.
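For reference, a hedged sketch of setting that up with gcloud (connector name, network, region and range are illustrative, and your SDK version may require the beta component):
gcloud compute networks vpc-access connectors create my-connector \
  --network default --region us-central1 --range 10.8.0.0/28
gcloud run deploy my-service --image gcr.io/my-project/my-image \
  --platform managed --region us-central1 --vpc-connector my-connector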
Alternatives to using this Beta include:
Use Cloud Run for Anthos, where GKE provides the capability to connect to Memorystore if the cluster is configured for it.
Stay within fully managed Serverless but use a GA version of the Serverless VPC Access feature by using App Engine with Memorystore.
While waiting for serverless VPC connectors on Cloud Run - Google said yesterday that announcements would be made in the near term - you can connect to Memorystore from Cloud Run using an SSH tunnel via GCE.
The basic approach is the following.
First, create a forwarder instance on GCE
gcloud compute instances create vpc-forwarder --machine-type=f1-micro --zone=us-central1-a
Don't forget to open port 22 in your firewall policies (it's open by default).
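If it is not open, a hedged one-liner (rule name is illustrative; the default network normally already has an allow-ssh rule):
gcloud compute firewall-rules create allow-ssh --network default --direction INGRESS --allow tcp:22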
Then install the gcloud CLI via your Dockerfile
Here is an example for a Rails app. The Dockerfile makes use of a script for the entrypoint.
# Use the official lightweight Ruby image.
# https://hub.docker.com/_/ruby
FROM ruby:2.5.5
# Install gcloud
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
RUN mkdir -p /usr/local/gcloud \
&& tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz \
&& /usr/local/gcloud/google-cloud-sdk/install.sh
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
# Generate SSH key to be used by the SSH tunnel (see entrypoint.sh)
RUN mkdir -p /home/.ssh && ssh-keygen -b 2048 -t rsa -f /home/.ssh/google_compute_engine -q -N ""
# Install bundler
RUN gem update --system
RUN gem install bundler
# Install production dependencies.
WORKDIR /usr/src/app
COPY Gemfile Gemfile.lock ./
ENV BUNDLE_FROZEN=true
RUN bundle install
# Copy local code to the container image.
COPY . ./
# Run the web service on container startup.
CMD ["bash", "entrypoint.sh"]
Finally open an SSH tunnel to Redis in your entrypoint.sh script
#!/bin/bash
# Memorystore config
MEMORYSTORE_IP=10.0.0.5
MEMORYSTORE_REMOTE_PORT=6379
MEMORYSTORE_LOCAL_PORT=6379
# Forwarder config
FORWARDER_ID=vpc-forwarder
FORWARDER_ZONE=us-central1-a
# Start tunnel to Redis Memorystore in background
gcloud compute ssh \
--zone=${FORWARDER_ZONE} \
--ssh-flag="-N -L ${MEMORYSTORE_LOCAL_PORT}:${MEMORYSTORE_IP}:${MEMORYSTORE_REMOTE_PORT}" \
${FORWARDER_ID} &
# Run migrations and start Puma
bundle exec rake db:migrate && bundle exec puma -p 8080
With the solution above Memorystore will be available to your application on localhost:6379.
There are a few caveats though
This approach requires the service account configured on your Cloud Run service to have the roles/compute.instanceAdmin role, which is quite powerful.
The SSH keys are baked into the image to speed up container boot time. That's not ideal.
There is no failover if your forwarder crashes.
I've written a longer and more elaborated approach in a blog post that improves the overall security and adds failover capabilities. The solution uses plain SSH instead of the gcloud CLI.
If you need something in your VPC, you can also spin up Redis on Compute Engine
It's more costly (especially for a cluster) than Redis Cloud, but a temporary solution if you have to keep the data in your VPC.
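A hedged sketch of that option (names and machine type are illustrative; you would still need to edit redis.conf so Redis listens on the VM's VPC address rather than only on localhost):
gcloud compute instances create redis-vm --machine-type=e2-small --zone=us-central1-a
gcloud compute ssh redis-vm --zone=us-central1-a \
  --command="sudo apt-get update && sudo apt-get install -y redis-server"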

Deploying a container from Google Container Registry to a Compute Engine VM

I am trying to deploy a container on a Google VM instance.
From the doc it seems straightforward: specify your image in the container text field and start the VM.
My image is stored in the Google Container Registry in the same project as the VM. However, the VM starts but does not pull and run the docker image. I ssh'ed the VM and docker images ls returns an empty list.
Pulling the image doesn't work.
~ $ docker pull gcr.io/project/image
Using default tag: latest
Error response from daemon: repository gcr.io/project/image not found: does not exist or no pull access
I know we're supposed to use gcloud docker, but gcloud isn't installed on the VM (which is dedicated to containers), so I suppose it's something else.
Also, the VM service account has read access to storage. Any idea?
From the GCR docs, you can use docker-credential-gcr to automatically authenticate with credentials from your GCE instance metadata.
To do that manually (assuming you have curl and jq installed):
TOKEN=$(curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" -H "Metadata-Flavor: Google" | jq -r ".access_token")
docker login -u oauth2accesstoken -p "$TOKEN" https://gcr.io
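A hedged sketch of the non-manual route with the credential helper itself (the binary comes from github.com/GoogleCloudPlatform/docker-credential-gcr; project and image names are placeholders):
docker-credential-gcr configure-docker   # registers the helper in ~/.docker/config.json
docker pull gcr.io/<your-project-id>/<your-image>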
To pull the image from the gcr.io container registry you can use the gcloud sdk, like this:
$ gcloud docker -- pull gcr.io/yrmv-191108/autoscaler
Or you can use the docker binary directly as you did. This command has the same effect as the previous gcloud one:
$ docker pull gcr.io/yrmv-191108/autoscaler
Basically your problem is that you are not specifying either the project you are working in or the image you are trying to pull, unless (very unlikely) your project ID is project and the image you want to pull is named image.
You can get a list of the images you have uploaded to your current project with:
$ gcloud container images list
Which, for me, gets:
NAME
gcr.io/yrmv-191108/autoscaler
gcr.io/yrmv-191108/kali
Only listing images in gcr.io/yrmv-191108. Use --repository to list images in other repositories.
If, for some reason, you can't install the gcloud SDK (which is highly advisable for working with Google Cloud), you can see your uploaded images in the Google Cloud Console by navigating to "Container Registry -> Images".