For a project we are trying to extend Google Cloud Datalab and deploy the modified version to the Google Cloud Platform. As I understand it, the deployment process normally consists of the following steps:
Build the Docker image
Push it to the Container Registry
Use the container parameter with the Google Cloud deployer to specify the correct Docker image, as explained here.
Since the default container registry, gcr.io/cloud_datalab/datalab:<tag>, is off-limits for non-Datalab contributors, we pushed the Docker image to our own container registry, gcr.io/<project_id>/datalab:<tag>.
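For reference, a minimal sketch of steps 1 and 2 under our setup (the project ID and tag are placeholders):
# Build the modified Datalab image locally
docker build -t gcr.io/<project_id>/datalab:<tag> .
# Let gcloud handle Docker auth for gcr.io, then push to our own registry
gcloud auth configure-docker
docker push gcr.io/<project_id>/datalab:<tag>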
However, the Google Cloud deployer only pulls directly from gcr.io/cloud_datalab/datalab:<tag> (with the tag specified by the container parameter) and does not seem to allow specifying the source container registry. The deployer does not appear to be open source, leaving us with no way to deploy our image to Google Cloud.
We have looked into creating a custom deployment similar to the example listed here, but this never starts Datalab, so we suspect the start script is more complicated.
Question: How can we deploy a Datalab image from our own container registry to Google Cloud?
Many thanks in advance.
The deployment parameters can be guessed, but it is easier to get the Google Cloud Datalab deployment script by SSHing into the temporary compute node responsible for the deployment and browsing the /datalab folder. It contains a runtime configuration file for use with the App Engine flexible environment. Using this configuration file, the gcloud preview app deploy command (which accepts an --image parameter for the Docker image) will deploy this to App Engine correctly.
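As a rough sketch of that procedure (the instance name, zone, and configuration file name here are assumptions, not taken from the actual deployer):
# Inspect the temporary deployer VM while it is alive
gcloud compute ssh <deployer-instance> --zone <zone>
ls /datalab   # holds the App Engine flexible runtime config, e.g. app.yaml
# From a machine holding that configuration file, deploy our own image
gcloud preview app deploy app.yaml --image gcr.io/<project_id>/datalab:<tag>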
Context
I'm working on an app. The code is in a Cloud Source Repository. I've set up a Build Trigger with Cloud Build so that when I push new commits, the app is automatically built: it's containerized and the image is pushed to the Artifact Registry.
I also have a Compute Engine VM instance with a Container-Optimized OS. It's set up to use my app's container image. So when I start the VM, it pulls the latest image from the Artifact Registry and runs the container.
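For context, that kind of VM corresponds to something like the following (a sketch only; the VM name, zone, and image path are placeholders):
# Create a Container-Optimized OS VM that runs the app's container
gcloud compute instances create-with-container my-app-vm \
    --zone us-central1-a \
    --container-image us-central1-docker.pkg.dev/<project-id>/<repo>/my-app:latest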
Issue
So, currently, deploying involves two steps:
Pushing new commits, which updates the container in the Artifact Registry.
Restarting my VM, which pulls the new container from the Artifact Registry.
Is there a way to combine these two steps?
Build Triggers detect code changes and trigger builds. Is there a similar way to automatically trigger deployments from the Artifact Registry to Compute Engine?
Thank you.
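One possible way to combine the two steps (a sketch only, not a confirmed feature; the instance, zone, and image names are placeholders) is to have the build itself refresh the VM, by running a final step that updates the VM's container declaration:
# Re-point the VM's container declaration at the new image; on a
# Container-Optimized OS VM this restarts the container with that image
gcloud compute instances update-container my-app-vm \
    --zone us-central1-a \
    --container-image us-central1-docker.pkg.dev/<project-id>/<repo>/my-app:latest
Run as a final Cloud Build step (for instance via the gcr.io/cloud-builders/gcloud builder), this would chain the deployment onto every push, provided the Cloud Build service account is granted the required Compute Engine permissions.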
Why does the Visual Studio Code extension "Cloud Code", when deploying a Cloud Run service, seem to store the image contents in Cloud Storage (via Container Registry)?
Can I make it store the image in the Google Cloud Artifact Registry instead?
I just tried the scenario and it worked for me! Following these steps should get you going.
Create an Artifact Registry repo at https://console.cloud.google.com/artifacts and set up Docker auth on your client to use gcloud to authenticate to the repo. You can find detailed steps to do this here.
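For example (the repo name and region are placeholders):
# Create a Docker-format Artifact Registry repository
gcloud artifacts repositories create my-repo \
    --repository-format=docker \
    --location=us-central1
# Configure the local Docker client to authenticate to that host via gcloud
gcloud auth configure-docker us-central1-docker.pkg.dev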
When deploying to Cloud Run in Cloud Code, you'll find that it defaults to a Container Registry repo as the "Container image URL", but you can just as easily use an Artifact Registry repo here instead: paste the repo name you created in the previous step and append an image name. Here's a screenshot of the example I just tested.
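For reference, a full Artifact Registry image URL has this shape (the project, repo, and image names are hypothetical):
us-central1-docker.pkg.dev/<project-id>/my-repo/my-image:latest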
When I try to mount a Google Cloud Storage bucket from inside a Google Cloud Run container (deployed from a Docker registry image), I receive the output below. Without privileged Docker execution this is expected, and as far as I have investigated, Google Cloud Run instances are not meant to support privileged container execution the way Google Compute Engine does.
Still, I am asking in case anyone has other knowledge about this: is there any other way to mount a bucket from a Cloud Run container?
Opening GCS connection... Opening bucket... Mounting file system...
daemonize.Run: readFromProcess: sub-process: mountWithArgs:
mountWithConn: Mount: mount: running fusermount: exit status 1
stderr:
fusermount: fuse device not found, try 'modprobe fuse' first
Posting this as a Community Wiki as it's based on the comments of @JohnHanley and @SuperEye:
Based on what you mentioned:
My docker images are not web services.
If this is the case, you cannot use Cloud Run for what you are trying to do. Cloud Run is an HTTP Request/Response system. Your container must respond to HTTP requests, otherwise it will be terminated.
Also, for your other comment:
Google Compute Engine cannot run docker images from Container Registry
That is an incorrect assumption. Compute Engine supports Container Registry.
In conclusion: Cloud Run does not support your final goal of mounting a bucket as a file system. An alternative is to use App Engine Flex.
I'm relatively new to Google Kubernetes Engine and the Google Cloud Platform.
I managed to use and connect the following services:
Source Repositories
Cloud Build and Container Registry
Kubernetes Engine
I'm currently using Git Bash on my local machine to push changes to Google Source Repositories. Google Cloud Build then builds the image and creates a new artifact. Each time I change my app and push, a new artifact is created, and I copy the new artifact's name into a Kubernetes Workloads rolling update by hand.
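(On the command line, that manual update amounts to something like the following; the deployment, container, and image names are placeholders.)
kubectl set image deployment/my-app my-app=gcr.io/<project-id>/my-app:<new-tag>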
Is there a better way to automate this, e.g. CI/CD without the manual update step?
You can set the rolling update strategy in your deployment spec from the beginning.
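As a sketch, the fields involved can also be patched onto an existing deployment like this (the deployment name and the surge/unavailability values are placeholders):
kubectl patch deployment my-app --patch '
{
  "spec": {
    "strategy": {
      "type": "RollingUpdate",
      "rollingUpdate": {"maxSurge": 1, "maxUnavailable": 0}
    }
  }
}'
The same strategy block can equally be declared in the deployment manifest from the start.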
You can then use Cloud Build to push new images to your cluster once the image has been built, instead of manually going to the GKE console to update the image.
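A sketch of such a Cloud Build config (the image, deployment, zone, and cluster names are placeholders; $PROJECT_ID and $SHORT_SHA are standard Cloud Build substitutions):
cat > cloudbuild.yaml <<'EOF'
steps:
# Build and push the image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA']
# Roll the new image out to the cluster, triggering the rolling update
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['set', 'image', 'deployment/my-app', 'my-app=gcr.io/$PROJECT_ID/my-app:$SHORT_SHA']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
  - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
EOF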
I deployed a Django project to the App Engine flexible environment from my local machine using gcloud app deploy.
The changes are reflected at the live URL.
I am trying to access the deployed Django project folder through Cloud Shell, but I am not able to find it.
What am I doing wrong?
Extended from discussion with @babygameover.
Google App Engine (GAE) is a PaaS. In GAE, one just codes locally and deploys the project, while the scaling of instances and their related resources is taken care of by gcloud.
To have control over the instances, the project would have to be moved to Google Compute Engine (GCE), which gives finer control over instance configuration.