Is it possible to add a tag to an image in Google Container Registry without installing Docker locally?

I need to add a tag to an image in a repo in Google Container Registry, and I was wondering whether this can be done without having to install Docker locally and do a pull, tag, and push on the image. I know how to do this through the UI, but I want to automate the process.
Thanks.

If you are looking for a Google Container Registry specific solution, you can use the gcloud container images add-tag command. For example:
gcloud container images add-tag gcr.io/myproject/myimage:mytag1 \
gcr.io/myproject/myimage:mytag2
Reference: https://cloud.google.com/sdk/gcloud/reference/container/images/add-tag
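You can verify that the new tag was applied with list-tags:
gcloud container images list-tags gcr.io/myproject/myimage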
If you want to do this from code, I'd suggest taking a look at these libraries:
Go: https://github.com/google/go-containerregistry
Python: https://github.com/google/containerregistry
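The Go project above also ships a small CLI called crane, which can retag an image remotely without pulling it. A minimal sketch, assuming crane is installed and you are authenticated (e.g. via gcloud as a Docker credential helper):
crane tag gcr.io/myproject/myimage:mytag1 mytag2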

If you don't want to use Docker directly, you have the option to use the gcloud command line; you just need to configure Docker to use gcloud as a credential helper:
gcloud auth configure-docker
After that you can list images by their storage location:
gcloud container images list --repository=[HOSTNAME]/[PROJECT-ID]
Or list the versions of an image:
gcloud container images list-tags [HOSTNAME]/[PROJECT-ID]/[IMAGE]
And you can tag images as well:
gcloud container images add-tag \
[HOSTNAME]/[PROJECT-ID]/[IMAGE]:[TAG] \
[HOSTNAME]/[PROJECT-ID]/[IMAGE]:[NEW_TAG]
or
gcloud container images add-tag \
[HOSTNAME]/[PROJECT-ID]/[IMAGE]#[IMAGE_DIGEST] \
[HOSTNAME]/[PROJECT-ID]/[IMAGE]:[NEW_TAG]
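If you need the digest for the second form, list-tags can print it. A hedged one-liner (the --filter/--format expressions are assumptions based on standard gcloud output keys):
gcloud container images list-tags [HOSTNAME]/[PROJECT-ID]/[IMAGE] \
  --filter="tags:[TAG]" --format="get(digest)"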
Everything described above can be done through the UI as well.
Regarding "I wanted to automate this process": I'm not sure exactly what you're looking for, but you can create a bash script wrapping the gcloud command and run it from Cloud Shell.
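For example, a minimal sketch (the image name and tags are placeholders; adjust to your repo):
#!/usr/bin/env bash
# Retag an existing image without pulling it locally.
set -euo pipefail
IMAGE="gcr.io/myproject/myimage"
OLD_TAG="mytag1"
NEW_TAG="mytag2"
# --quiet skips the interactive confirmation prompt.
gcloud container images add-tag --quiet "${IMAGE}:${OLD_TAG}" "${IMAGE}:${NEW_TAG}"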

Related

GCP - gcloud commands history for actions done via GUI

When I do something in GCP console (by clicking in GUI), I imagine some gcloud command is executed underneath. Is it possible to view this command?
(I created a notebooks instance on Vertex AI and wanted to know what exactly I should put after gcloud notebooks instances create... to get the same result)
I think it's not possible to view the gcloud command behind a GUI action.
You should instead test your gcloud command by creating another instance alongside the current one, with all the needed parameters.
When the two instances are the same, you know that your gcloud command is ready; one way to compare them is sketched below.
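For example, diff their describe output (instance names and location are placeholders):
diff <(gcloud notebooks instances describe instance-a --location=europe-west3-b --format=yaml) \
     <(gcloud notebooks instances describe instance-b --location=europe-west3-b --format=yaml)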
The documentation seems clear and complete for this:
https://cloud.google.com/vertex-ai/docs/workbench/user-managed/create-new#gcloud
If it's an option for you, you can also consider Terraform to automate this creation with state management.
Try this for a Python-based user-managed notebook (the GUI version of the Python instance uses the base image as the boot disk, which does not contain Python; the Python suite is installed explicitly via metadata parameters):
export NETWORK_URI="NETWORK URI"
export SUBNET_URI="SUBNET URI"
export INSTANCE_NAME="instance-name-of-your-liking"
export VM_IMAGE_PROJECT="deeplearning-platform-release"
export VM_IMAGE_FAMILY="common-cpu-notebooks-debian-10"
export MACHINE_TYPE="n1-standard-4"
export LOCATION="europe-west3-b"
gcloud notebooks instances create $INSTANCE_NAME \
--no-public-ip \
--vm-image-project=$VM_IMAGE_PROJECT \
--vm-image-family=$VM_IMAGE_FAMILY \
--machine-type=$MACHINE_TYPE \
--location=$LOCATION \
--network=$NETWORK_URI \
--subnet=$SUBNET_URI \
--metadata=framework=NumPy/SciPy/scikit-learn,report-system-health=true,proxy-mode=service_account,shutdown-script=/opt/deeplearning/bin/shutdown_script.sh,notebooks-api=PROD,enable-guest-attributes=TRUE
To get a list of Network URIs in your project:
gcloud compute networks list --uri
To get a list of Subnet URIs in your project:
gcloud compute networks subnets list --uri
Put the corresponding URIs in between the quotation marks in the first two variables:
export NETWORK_URI="NETWORK URI"
export SUBNET_URI="SUBNET URI"
Name the instance (keep the quotation marks):
export INSTANCE_NAME="instance-name-of-your-liking"
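If you prefer to fill the first two in non-interactively, something like this should work (the name=default filter is an assumption; adjust it to your network setup):
export NETWORK_URI=$(gcloud compute networks list --uri --filter="name=default")
export SUBNET_URI=$(gcloud compute networks subnets list --uri --filter="name=default AND region:europe-west3")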
When done, copy and paste the complete block into your Google Cloud Shell (assuming you are in the correct project).
To additionally enable Secure Boot (which is a tick box in the GUI setup):
gcloud compute instances stop $INSTANCE_NAME
gcloud compute instances update $INSTANCE_NAME --shielded-secure-boot
Hope it works for you, as it does for me.

Deploying a container from Google Container Registry to a Compute Engine VM

I am trying to deploy a container on a Google VM instance.
From the doc it seems straightforward: specify your image in the container text field and start the VM.
My image is stored in Google Container Registry, in the same project as the VM. However, the VM starts but does not pull and run the Docker image. I SSH'ed into the VM, and docker image ls returns an empty list.
Pulling the image doesn't work.
~ $ docker pull gcr.io/project/image
Using default tag: latest
Error response from daemon: repository gcr.io/project/image not found: does not exist or no pull access
I know we're supposed to use gcloud docker, but gcloud isn't installed on the VM (which is dedicated to containers), so I suppose it's something else.
Also, the VM service account has read access to storage. Any idea?
From the GCR docs, you can use docker-credential-gcr to automatically authenticate with credentials from your GCE instance metadata.
To do that manually (assuming you have curl and jq installed):
TOKEN=$(curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" -H "Metadata-Flavor: Google" | jq -r ".access_token")
docker login -u oauth2accesstoken -p "$TOKEN" https://gcr.io
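Alternatively, if the docker-credential-gcr binary is present on the VM, it can register itself as a Docker credential helper so the token exchange above happens automatically; per the tool's README:
docker-credential-gcr configure-docker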
To pull the image from the gcr.io container registry, you can use the gcloud SDK, like this:
$ gcloud docker -- pull gcr.io/yrmv-191108/autoscaler
Or you can use the docker binary directly as you did. This command has the same effect as the previous gcloud one:
$ docker pull gcr.io/yrmv-191108/autoscaler
Basically, your problem is that you are specifying neither the project you are working in nor the image you are trying to pull, unless (very unlikely) your project ID is project and the image you want to pull is named image.
You can get a list of the images you have uploaded to your current project with:
$ gcloud container images list
Which, for me, returns:
NAME
gcr.io/yrmv-191108/autoscaler
gcr.io/yrmv-191108/kali
Only listing images in gcr.io/yrmv-191108. Use --repository to list images in other repositories.
If, for some reason, you don't have permission to install the gcloud SDK (highly advisable for working with Google Cloud), you can see your uploaded images in the Google Cloud Console by navigating to Container Registry -> Images.

Automatic deployment of Docker containers on AWS ECS using Jenkins or Job Scheduler

Currently we build our Docker containers and publish them to Amazon ECR. We have created task definitions and are able to deploy them manually on an ECS cluster, so a new deployment involves a manual update of the task definition.
Now we would like to automate the deployment, so that when a Docker image is successfully built by Jenkins and published to the ECR repo, we can replace the currently running version with the newly built one.
In addition, we would like to give people the option to launch a specific version of one or more combinations of Docker containers. Any suggestion on how we could implement a continuous cycle without manually updating the task definitions?
A simpler solution for this might be to use the ecs-deploy script from here:
https://github.com/silinternational/ecs-deploy
After my container has been built and pushed to Docker Hub, it's simply a matter of:
ecs-deploy -k $AWS_KEY -s $AWS_SECRET -r $AWS_REGION -c $CLUSTER_NAME -n $SERVICE_NAME -i $DOCKER_IMAGE_NAME
and that does it.
This article describes how to do continuous deployments to ECS with Jenkins: after the image has been built and pushed, a shell script registers a new task definition revision and updates the ECS service; see the sketch below. Hope it helps.
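A hedged sketch of such a script, using the aws CLI and jq (the cluster, service, family, and image names are placeholders):
# Fetch the active task definition, swap in the newly pushed image,
# register it as a new revision, and point the service at it.
TASK_DEF=$(aws ecs describe-task-definition --task-definition "$FAMILY" \
  --query 'taskDefinition' --output json)
NEW_DEF=$(echo "$TASK_DEF" | jq --arg IMAGE "$IMAGE" \
  '.containerDefinitions[0].image = $IMAGE | {family, containerDefinitions, volumes}')
aws ecs register-task-definition --cli-input-json "$NEW_DEF"
aws ecs update-service --cluster "$CLUSTER" --service "$SERVICE" --task-definition "$FAMILY"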

unable to see images or pull from registry

I am unable to see images from the registry
1. gcloud auth login
2. From the local machine: gcloud docker push gcr.io/project-id/image-name
3. From the VM running Docker: gcloud docker images
I see nothing and am therefore unable to run any containers. Do you know why?
docker images just displays images that have been pulled to the local VM.
Try running gcloud docker pull gcr.io/project-id/image-name to get it onto your VM. Then docker images should show it.
If you are on Docker 1.8 or later (see docker version), you can also run gcloud docker search gcr.io/project-id to see the list of images under project-id.

How to use Google Container Registry with the docker CLI

Google Container Registry documentation explains that in order to pull and push images to gcr.io, you have to prefix docker push and pull commands with gcloud preview.
gcloud preview docker push gcr.io/<gcr_namespace>/<docker-image>
gcloud preview docker pull gcr.io/<gcr_namespace>/<docker-image>
Is there a way to use Google Container Registry with the docker CLI directly, without gcloud preview prefix?
You can use the following command:
gcloud preview docker -a
to update your local Docker configuration with gcr.io credentials.
And then use the regular docker CLI commands to push and pull images:
docker build -t gcr.io/<gcr_namespace>/<docker-image> .
docker push gcr.io/<gcr_namespace>/<docker-image>
Or for existing images:
docker tag <docker-image> gcr.io/<gcr_namespace>/<docker-image>
docker push gcr.io/<gcr_namespace>/<docker-image>
docker pull gcr.io/<gcr_namespace>/<docker-image>
This configuration is good for interoperability with the native docker CLI, but it is not ideal, as gcloud preview docker -a will need to be run again after the credentials expire.
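On newer SDK versions, the credential-helper setup mentioned in an earlier answer avoids this, since Docker then fetches fresh credentials through gcloud on every push and pull:
gcloud auth configure-docker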
When building a new image, tag it directly to gcr.io during a docker build:
gcloud preview docker -a
docker build -t gcr.io/<project_id>/<docker-image> <directory>
docker push gcr.io/<project_id>/<docker-image>