Google Cloud Run / Mounting Google Storage Bucket

From a container deployed to Google Cloud Run from Container Registry, when I try to mount a Google Storage bucket, I receive the output below. Without privileged Docker execution this is expected, and as far as I have investigated, Google Cloud Run instances are not meant to support privileged container execution the way Google Compute Engine does.
Still, I am asking whether anyone knows more about this: is there any other way to mount a bucket from a Cloud Run container?
Opening GCS connection... Opening bucket... Mounting file system...
daemonize.Run: readFromProcess: sub-process: mountWithArgs:
mountWithConn: Mount: mount: running fusermount: exit status 1
stderr:
fusermount: fuse device not found, try 'modprobe fuse' first

Posting this as a Community Wiki, as it's based on the comments of @JohnHanley and @SuperEye:
Based on what you mentioned:
My docker images are not web services.
If this is the case, you cannot use Cloud Run for what you are trying to do. Cloud Run is an HTTP Request/Response system. Your container must respond to HTTP requests, otherwise it will be terminated.
Also, for your other comment:
Google Compute Engine cannot run docker images from Container Registry
That is an incorrect assumption. Compute Engine supports Container Registry.
In conclusion, for your final goal of mounting a bucket as a file system, Cloud Run does not support that ability. An alternative is to use App Engine Flex.
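If you do move to App Engine Flex, a container image that already lives in Container Registry can be deployed as a custom runtime. A rough sketch, assuming the image serves HTTP on port 8080 as App Engine expects (the file name and image path are placeholders):
# Minimal app.yaml for a custom-runtime Flex service
cat > app.yaml <<'EOF'
runtime: custom
env: flex
EOF

# Deploy the prebuilt image from your own registry
gcloud app deploy app.yaml \
  --image-url=gcr.io/<project_id>/<image>:<tag>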

Related

How to Deploy a docker container with volume in Cloud Run

I am trying to publish an application I wrote in .NET Core with Docker and a mounted volume. I can't really figure out or see any clear solution to my issue that will be cheap (it's for a university project).
I tried running a docker-compose via a cloudbuild.yml linked in this post with no luck. I also tried to put my db file in a Firebase project and access it from the program, but it didn't work. I also read in the GCP documentation that I can probably use Filestore, but the pricing is way out of budget for me. I need to publish an SQLite database so my server can work correctly, that's it.
Any help would be really appreciated!
Basically, you can't mount a volume in Cloud Run. It's a stateless environment and you can't persist data on it. You have to use external storage to persist your data. See the runtime contract.
With the second generation execution environment, you can now mount a Cloud Storage bucket with GCS FUSE, and a Filestore path with NFS, as sketched below.
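A minimal sketch of the GCS FUSE route (the bucket name, mount path, and server command are assumptions; the image needs gcsfuse installed and the service must be deployed on the gen2 execution environment):
#!/usr/bin/env bash
# Entrypoint sketch: mount the bucket with gcsfuse, then start the app.
set -euo pipefail

BUCKET_NAME=my-bucket   # assumed bucket holding the SQLite file
MNT_DIR=/mnt/gcs        # assumed mount path

mkdir -p "$MNT_DIR"
gcsfuse --implicit-dirs "$BUCKET_NAME" "$MNT_DIR"

# Placeholder for the real server command (.NET Core in this question)
exec dotnet /app/Server.dll
Note that SQLite over a FUSE-mounted bucket tends to have trouble with file locking and concurrent writers, so it fits a single-instance, read-mostly setup at best.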

How to mount persistent storage to Google Cloud Run?

I was trying to run a Docker image with Cloud Run and realised that there is no option for adding persistent storage. I found a list of services in https://cloud.google.com/run/docs/using-gcp-services#connecting_to_services_in_code, but all of them are accessed from code. I was looking to share a volume with persistent storage. Is there a way around it? Is it because persistent storage might not work when shared between multiple instances at the same time? Is there an alternative solution?
Cloud Run is serverless: it abstracts away all infrastructure management. It is also a managed compute platform that automatically scales your stateless containers.
Filesystem access: the filesystem of your container is writable and is subject to the following behavior: it is an in-memory filesystem, so writing to it uses the container instance's memory, and data written to the filesystem does not persist when the container instance is stopped.
You can use Google Cloud Storage, Firestore or Cloud SQL if your application is stateful.
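Since anything written to the container filesystem lives in instance memory and disappears when the instance stops, state has to be pushed out to one of those services. A tiny sketch using Cloud Storage (bucket and file paths are made up):
# Restore previously saved state on startup, if any
gsutil cp gs://my-bucket/state/app-state.json /tmp/app-state.json || true

# ... the application reads and writes /tmp/app-state.json while serving requests ...

# Persist the state back to the bucket before the instance goes away
gsutil cp /tmp/app-state.json gs://my-bucket/state/app-state.json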
3 Great Options for Persistent Storage with Cloud Run
What's the default storage for Google Cloud Run?
Cloud Run (fully managed) has a list of services that are not yet supported, including Filestore, which is a persistent storage option. However, you can consider running your Docker image on Cloud Run for Anthos, which runs on GKE; there you can use persistent volumes, typically backed by Compute Engine persistent disks.
Having persistent storage in (fully managed) Cloud Run should be possible now.
Cloud Run's second generation execution environment (gen2) supports network mounted file systems.
Here are some alternatives:
Cloud Run + GCS: Using Cloud Storage FUSE with Cloud Run tutorial
Cloud Run + Filestore: Using Filestore with Cloud Run tutorial
If you need help deciding between those, check this:
Design an optimal storage strategy for your cloud workload
NOTE: At the time of this answer, Cloud Run gen2 is in Preview.
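As a rough illustration of the gen2 + Filestore route described in the tutorial above (the service name, VPC connector, Filestore IP, share name, and mount path are all assumptions):
# Deploy on the gen2 execution environment so the container can mount NFS
gcloud run deploy my-service \
  --image gcr.io/<project_id>/<image>:<tag> \
  --execution-environment gen2 \
  --vpc-connector my-connector   # assumed Serverless VPC Access connector reaching Filestore

# In the container's entrypoint, mount the Filestore share before starting the app
mkdir -p /mnt/nfs
mount -o nolock 10.0.0.2:/my_share /mnt/nfs   # assumed Filestore IP and share name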

Pods can't pull image from GCR after configuring google cloud sql proxy

I have a simple application (REST APIs based on Python and Flask) that works well on Google Kubernetes Engine (GKE). My CI/CD setup creates a Docker image, pushes it to Google Container Registry (GCR) and then deploys it to GKE. Everything works well.
Now I added a database. It will be hosted on Google Cloud SQL. To access the database from Kubernetes, I'm using the Cloud SQL proxy (as a sidecar) and Workload Identity, as recommended by Google.
My problem is that after configuring the Cloud SQL proxy, I'm getting this error:
ImagePullBackOff: Cannot pull image 'gcr.io/xxx-project/xxx-image:xxx-tag' from the registry.
The Cloud SQL proxy image is pulled correctly (I think because it's hosted in a public registry), but not my image, so the pod keeps crashing.
Did I miss something? Should I add Docker credentials? It's weird because it was working before setting up the Cloud SQL proxy!
Many thanks for your help,
Best regards
I think there's something important to understand here: Autopilot doesn't use Workload Identity, or anything to do with the pod's permissions, to pull images. It uses the default compute service account for your project.
It is the nodes that need permission to pull images, not the pods. See this note from the GCP documentation on Workload Identity.
Note: Even with Workload Identity enabled, GKE still uses the configured Google Service Account for the node pool to pull container images from the image registry. If you encounter ImagePullBackOff or ErrImagePull errors, check the troubleshooting documentation.
I had the same thing happen to me, and it turned out that the default compute service account had been deleted. I restored it (using these instructions: Deleted Compute Engine default service account) and gave it storage.admin permissions, and that resolved the issue.
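For reference, giving the nodes' default compute service account read access to the registry bucket looks roughly like this (project ID and project number are placeholders; roles/storage.objectViewer is sufficient for pulls, while the answer above used the broader storage.admin):
# Allow the GKE nodes' default compute service account to pull images from GCR
gcloud projects add-iam-policy-binding <project_id> \
  --member="serviceAccount:<project_number>-compute@developer.gserviceaccount.com" \
  --role="roles/storage.objectViewer"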

AWS ECS container logs design pattern

I have a classic Scala app; it produces three different logs in these locations:
/var/log/myapp/log1/mylog.log
/var/log/myapp/log2/another.log
/var/log/myapp/log3/anotherone.log
I containerized the app and it works fine; I can get those logs with a Docker volume mount.
Now the app/container will be deployed to AWS ECS with an Auto Scaling group; in this case, multiple containers may run on a single ECS host.
I would like to use CloudWatch to monitor my application logs.
One solution could be to put the AWS logs agent inside my application container.
Is there a better way to get those application logs from the container to CloudWatch Logs?
Help is very much appreciated.
When using docker, the recommended approach is to not log to files, but to send logs to stdout and stderr. Doing so prevents the logs from being written to the container's filesystem, and (depending on the logging driver in use), allows you to view the logs using the docker logs / docker container logs subcommand.
Many applications have a configuration option to log to stdout/stderr, but if that's not an option, you can create a symlink to redirect output; for example, the official NGINX image on Docker Hub uses this approach.
Docker supports logging drivers, which allow you to send logging to (among others) AWS CloudWatch. After you modify your image to make it log to stdout/stderr, you can configure the awslogs logging driver.
More information about logging in Docker can be found in the "logging" section of the documentation.
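A rough sketch of both steps, using the NGINX-style symlink trick for the three log files from the question and the awslogs driver (the log group, region, and image name are assumptions; on ECS the driver is set in the task definition's logConfiguration rather than on docker run):
# In the Dockerfile or entrypoint: redirect the app's log files to stdout/stderr
ln -sf /dev/stdout /var/log/myapp/log1/mylog.log
ln -sf /dev/stdout /var/log/myapp/log2/another.log
ln -sf /dev/stderr /var/log/myapp/log3/anotherone.log

# Run with the awslogs logging driver so everything lands in CloudWatch Logs
docker run --log-driver=awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=/ecs/my-scala-app \
  my-scala-app:latest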
You don't need a log agent if you can change the code.
You can directly publish custom metric data to CloudWatch, as described here: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/examples-cloudwatch-publish-custom-metrics.html
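The linked page uses the Java SDK; the same idea from a shell, as a quick sketch (namespace and metric name are made up):
# Publish one custom metric data point to CloudWatch
aws cloudwatch put-metric-data \
  --namespace "MyApp" \
  --metric-name "ProcessedRecords" \
  --unit Count \
  --value 42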

Deploying a custom build of Datalab to Google Cloud platform

For a project we are trying to extend Google Cloud Datalab and deploy the modified version to the Google Cloud Platform. As I understand it, the deployment process normally consists of the following steps:
Build the Docker image
Push it to the Container Registry
Use the container parameter with the Google Cloud deployer to specify the correct Docker image, as explained here.
Since the default container registry, i.e. gcr.io/cloud_datalab/datalab:<tag>, is off-limits for non-Datalab contributors, we pushed the Docker image to our own container registry, i.e. to gcr.io/<project_id>/datalab:<tag>.
However, the Google Cloud deployer only pulls directly from gcr.io/cloud_datalab/datalab:<tag> (with the tag specified by the container parameter) and does not seem to allow specification of the source container registry. The deployer does not appear to be open source, leaving us with no way to deploy our image to Google Cloud.
We have looked into creating a custom deployment similar to the example listed here but this never starts Datalab, so we suspect the start script is more complicated.
Question: How can we deploy a Datalab image from our own container registry to Google Cloud?
Many thanks in advance.
The deployment parameters can be guessed, but it is easier to get the Google Cloud Datalab deployment script by SSHing into the temporary compute node that is responsible for deployment and browsing the /datalab folder. It contains a runtime configuration file for use with the App Engine Flexible Environment. Using this configuration file, the gcloud preview app deploy command (which accepts an --image parameter for Docker images) will deploy it to App Engine correctly.
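Roughly, and following that description (the configuration file name and exact flag usage are assumptions; check the file actually found under /datalab and your gcloud version):
# Sketch: deploy the custom Datalab image from your own registry using the
# runtime configuration copied from the temporary deployment node
gcloud preview app deploy /datalab/app.yaml \
  --image gcr.io/<project_id>/datalab:<tag> \
  --project <project_id>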