How to get a specific tag from Google GCR with regex using gcloud

I'm tagging my images with git_short_sha and branch_name and some other tags. I'm trying to get the short_sha tag of the image tagged with master using a gcloud command.
I've managed to get all tags, or a specific tag by its position, with these commands:
1. gcloud container images list-tags eu.gcr.io/${PROJECT_ID}/${IMAGE} --format='value(tags[1])' --filter="tags=master"
result:
76f1a2a
but I cannot be sure that the second element will always be the short_sha.
2. gcloud container images list-tags eu.gcr.io/${PROJECT_ID}/${IMAGE} --format='value(tags)' --filter="tags=master"
result:
1.0.0-master,76f1a2a,76f1a2a-master,master
Is it possible to get only the short_sha tag using the gcloud command alone?

Try:
gcloud container images list-tags eu.gcr.io/${PROJECT_ID}/${IMAGE} \
--flatten="[].tags[]" \
--filter="tags=${TAG}"
This came up once before; if you encounter that bug, you can try the jq solution instead.
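Not pure gcloud, but building on the comma-separated tag list from command 2, a minimal sketch that extracts just the bare short-SHA tag (assuming it is always exactly seven hex characters):
gcloud container images list-tags eu.gcr.io/${PROJECT_ID}/${IMAGE} --format='value(tags)' --filter="tags=master" | tr ',' '\n' | grep -E '^[0-9a-f]{7}$'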

Related

What does `gcloud compute instances create` do? - POST https://compute.googleapis.com…

Some things are very easy to do with the gcloud CLI, like:
$ export network='default' instance='example-instance' firewall='ssh-http-icmp-fw'
$ gcloud compute networks create "$network"
$ gcloud compute firewall-rules create "$firewall" --network "$network" \
--allow 'tcp:22,tcp:80,icmp'
$ gcloud compute instances create "$instance" --network "$network" \
--tags 'http-server' \
--metadata \
startup-script='#! /bin/bash
# Installs apache and a custom homepage
apt update
apt -y install apache2
cat <<EOF > /var/www/html/index.html
<html><body><h1>Hello World</h1>
<p>This page was created from a startup script.</p>
</body></html>
EOF'
$ sleep 15  # give the instance time to boot
$ curl $(gcloud compute instances list --filter='name=('"$instance"')' \
--format='value(EXTERNAL_IP)')
(To be exhaustive, tear everything down afterwards; the instance and firewall rule must be deleted before the network they reference:)
$ gcloud compute instances delete -q "$instance"
$ gcloud compute firewall-rules delete -q "$firewall"
$ gcloud compute networks delete -q "$network"
…but it's not clear what the equivalent calls are on the REST API side, especially considering the huge number of options, e.g., at https://cloud.google.com/compute/docs/reference/rest/v1/instances/insert
So I was thinking I'd just borrow whatever gcloud does internally when writing my custom REST API client for Google Cloud's Compute Engine.
Running rg over the SDK source, I found a bunch of lines like this:
https://github.com/googleapis/google-auth-library-python/blob/b1a12d2/google/auth/transport/requests.py#L182
Specifically, these five files under lib/third_party:
google/auth/transport/{_aiohttp_requests.py,requests.py,_http_client.py,urllib3.py}
google_auth_httplib2/__init__.py
Below each of them I added _LOGGER.debug("With body: %s", body), but there seems to be some fancy batching going on, because I almost never see that "With body" line 😞
Now I'm messing with Wireshark to see what I can find, but I'm fairly sure this is a rabbit hole not worth going down. Ditto for https://console.cloud.google.com/home/activity.
How can I find out what body is being set by gcloud?
Add the command-line option --log-http to see the REST API calls, including the request bodies.
There is no stable mapping, though, as the CLI changes over time: features are added and removed between releases.
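For example, reusing the variables from the question, the request gcloud builds for the instance insert (method, URI, headers, and JSON body) should appear in the output of:
$ gcloud compute instances create "$instance" --network "$network" \
--tags 'http-server' \
--log-http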

How to authenticate a gcloud service account from within a docker container

I’m trying to create a Docker container that will execute a BigQuery query. I started with the Google-provided image, which already has gcloud, and added a bash script containing my query. I'm passing my service account key in via a mounted file.
Dockerfile
FROM gcr.io/google.com/cloudsdktool/cloud-sdk:latest
COPY main.sh main.sh
main.sh
gcloud auth activate-service-account X@Y.iam.gserviceaccount.com --key-file=/etc/secrets/service_account_key.json
bq query --use_legacy_sql=false
The gcloud command successfully authenticates but can't save its configuration to /.config/gcloud, saying it is read-only. I've tried modifying that folder's permissions during the build but am struggling to get it right.
Is this the right approach or is there a better way? If this is the right approach, how can I ensure gcloud can write to the necessary folder?
See the example at the bottom of the Usage section of the cloud-sdk image's documentation.
You ought to be able to combine this into a single docker run command:
KEY="service_account_key.json"
echo "
[auth]
credential_file_override = /certs/${KEY}
" > ${PWD}/config
docker run \
--detach \
--env=CLOUDSDK_CONFIG=/config \
--volume=${PWD}/config:/config \
--volume=/etc/secrets/${KEY}:/certs/${KEY} \
gcr.io/google.com/cloudsdktool/cloud-sdk:latest \
bq query \
--use_legacy_sql=false
Where:
--env sets the container's value for CLOUDSDK_CONFIG; it depends on the first --volume flag, which maps the config file we created in ${PWD} on the host to /config in the container.
The second --volume flag maps the host's /etc/secrets/${KEY} (per your question) to the container's /certs/${KEY}. Change as you wish.
Suitably configured (🤞), you can then run bq.
I've not tried this but that should work :-)
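Alternatively, if you want to keep the original Dockerfile-plus-main.sh approach, a minimal sketch (equally untested; it assumes the key is still mounted at /etc/secrets and uses a placeholder query) is to point gcloud at a writable config directory in main.sh:
#!/usr/bin/env bash
# Sketch: give gcloud a writable config directory so activate-service-account can persist credentials
export CLOUDSDK_CONFIG=/tmp/gcloud
mkdir -p "${CLOUDSDK_CONFIG}"
gcloud auth activate-service-account --key-file=/etc/secrets/service_account_key.json
bq query --use_legacy_sql=false 'SELECT 1'  # placeholder query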

gcloud CLI --filter, --format, and --limit behaviour

I'm experiencing very strange behaviour with the --filter, --format, and --limit flags.
I have the following command:
gcloud run revisions list --sort-by=~creationTimestamp --service "api-gateway" --platform managed --format="value(metadata.name)" --filter="spec.containers.env.name=ENDPOINTS_SERVICE_NAME"
The command returns this list, with 177 items in total:
api-gateway-00295-xeb 2020-07-21T06:46:14.991421Z
api-gateway-00283-wug 2020-07-20T14:41:02.108809Z
api-gateway-00281-yix 2020-07-20T14:32:17.325634Z
api-gateway-00278-ham 2020-07-20T12:50:13.385984Z
api-gateway-00276-mol 2020-07-17T12:21:36.897245Z
api-gateway-00274-nih 2020-07-16T07:50:18.544546Z
api-gateway-00272-kol 2020-07-13T12:55:35.485589Z
api-gateway-00270-vis 2020-07-13T08:38:52.352422Z
api-gateway-00263-zaf 2020-07-10T14:08:36.502972Z
...
The first oddity is that the timestamp is returned at all. (I state what I want with --format, and when I remove the --sort-by flag the timestamp is gone.)
Secondly, when I add --limit 1 no result is returned at all!
gcloud run revisions list --sort-by=~creationTimestamp --service "api-gateway" --platform managed --format="value(metadata.name)" --filter="spec.containers.env.name=ENDPOINTS_SERVICE_NAME" --limit 1
With --limit 5 only two are returned, so it could be that the limit is applied before filtering, although the documentation says it should be the other way around.
However the "latest" entry is api-gateway-00295-xeb and should be returned with a limit of 1.
I don't understand the gcloud CLI's behaviour here.
Does anyone have an explanation for these two issues?
As @DazWilkin suggested, I created an issue in the public Google issue tracker:
https://issuetracker.google.com/issues/161833506
The Cloud SDK engineering team is looking into this; however, there is no ETA.
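Until it is resolved, a workaround sketch is to let gcloud do the filtering and sorting, and apply the limit client-side instead:
gcloud run revisions list --sort-by=~creationTimestamp --service "api-gateway" --platform managed --format="value(metadata.name)" --filter="spec.containers.env.name=ENDPOINTS_SERVICE_NAME" | head -n 1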

How to get a list of all docker-machine images for Google Cloud

I'm creating docker-machines in Google Cloud with this shell command:
docker-machine create --driver google \
--google-project my-project \
--google-zone my-zone \
--google-machine-image debian-cloud/global/images/debian-10-buster-v20191210 \
machine-name
As you can see, I use the image debian-10-buster-v20191210, but I want to switch to an older version of the image, and the problem is that I can't find where the list of such images (debian-10-buster-v*) lives. Can you please help me find it?
You can determine the list of available images using the gcloud command line:
--show-deprecated indicates you want to see ALL images, not just the latest
--filter selects only images whose name starts with debian-10-buster
$ gcloud compute images list --filter="name=debian-10-buster" --show-deprecated
NAME                        PROJECT       FAMILY     DEPRECATED  STATUS
debian-10-buster-v20191115  debian-cloud  debian-10  DEPRECATED  READY
debian-10-buster-v20191121  debian-cloud  debian-10  DEPRECATED  READY
debian-10-buster-v20191210  debian-cloud  debian-10              READY
You can find additional information in the gcloud Images List documentation.
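Any NAME from that list can then be plugged back into the docker-machine command from the question, e.g. to pin the older image:
docker-machine create --driver google \
--google-project my-project \
--google-zone my-zone \
--google-machine-image debian-cloud/global/images/debian-10-buster-v20191115 \
machine-name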

Delete untagged images on Google Cloud Registry [closed]

When we push repeatedly to gcr.io with the same image name and version (tag), we end up with a large number of untagged images.
Is there a simple way to remove all untagged images for a single image using the gcloud CLI tool to avoid incurring Storage costs?
gcloud container images list-tags gcr.io/project-id/repository --format=json --limit=unlimited will give you an easily consumable JSON blob of information about the images in a repo (such as digests with their associated tags).
In order to just enumerate all digests which lack tags:
gcloud container images list-tags gcr.io/project-id/repository --filter='-tags:*' --format='get(digest)' --limit=unlimited
Which you can iterate over and delete with:
gcloud container images delete --quiet gcr.io/project-id/repository@DIGEST
These are handy when you glue them together with awk and xargs:
gcloud container images list-tags gcr.io/${PROJECT_ID}/${IMAGE} --filter='-tags:*' --format='get(digest)' --limit=unlimited | awk -v image="gcr.io/${PROJECT_ID}/${IMAGE}" '{print image "@" $1}' | xargs gcloud container images delete --quiet
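One caveat: if there are no untagged digests, xargs will still invoke the delete command once with no arguments, which fails. With GNU xargs you can add -r (--no-run-if-empty) to the final stage so nothing runs in that case:
... | xargs -r gcloud container images delete --quiet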
The previous one-liner was not working for me. I am using this one at the moment:
It deletes all the untagged images for a given PROJECT_ID and IMAGE.
It uses xargs -I, which defines a replacement token ({arg} in this case):
gcloud container images list-tags gcr.io/${PROJECT_ID}/${IMAGE} \
--filter='-tags:*' --format='get(digest)' --limit=unlimited |\
xargs -I {arg} gcloud container images delete \
"gcr.io/${PROJECT_ID}/${IMAGE}#{arg}" --quiet
My use case was to delete all untagged images from a specific project. I wrote a simple script to achieve this:
delete_untagged() {
  echo " |-Deleting untagged images for $1"
  while read digest; do
    gcloud container images delete "$1@$digest" --quiet 2>&1 | sed 's/^/ /'
  done < <(gcloud container images list-tags "$1" --filter='-tags:*' --format='get(digest)' --limit=unlimited)
}
delete_for_each_repo() {
  echo "|-Will delete all untagged images in $1"
  while read repo; do
    delete_untagged "$repo"
  done < <(gcloud container images list --repository "$1" --format="value(name)")
}
delete_for_each_repo gcr.io/<project-id>/<repository>
The full script can be found here: https://gist.github.com/lahsivjar/2b011d69368a26af7043d4aa70ec78f8
Hope it will be helpful to someone
It is documented here, but one important thing to note is that
DIGEST must be of the form "sha256:<digest>"
So first capture the sha256:<digest> of the untagged image:
$ DIGEST=`gcloud container images list-tags gcr.io/[PROJECT-ID]/[IMAGE] \
--filter='-tags:*' --format='get(digest)'`
$ echo $DIGEST
sha256:7c077a9ca45aea7134d8436a3071aceb5fa62758cc86eadec63f02692b7875f7
Then use the variable to remove it
$ gcloud container images delete --quiet gcr.io/[PROJECT-ID]/[IMAGE]@$DIGEST
Digests:
- gcr.io/[PROJECT-ID]/[IMAGE]@sha256:7c077a9ca45a......
Deleted [gcr.io/[PROJECT-ID]/[IMAGE]@sha256:7c077a9ca45a......].
PowerShell version of the one-liner posted by @Benos:
gcloud container images list-tags gcr.io/myprojectname/myimagename --filter='-tags:*' --format='get(tags)' --limit=unlimited | ForEach-Object { gcloud container images delete "gcr.io/myprojectname/myimagename:$PSItem" --quiet }
Remove the --filter='-tags:*' to delete all tags of a certain image (this is what I was trying to accomplish)
Corrected PowerShell command:
gcloud container images list-tags gcr.io/myprojectname/myimagename --filter='-tags:*' --format='get(digest)' --limit=unlimited | ForEach-Object { gcloud container images delete "gcr.io/myprojectname/myimagename@$PSItem" --quiet }