How to get the latest version of an image from Artifact Registry - google-cloud-platform

Is there a gcloud command that returns the latest fully qualified name of an image from Artifact Registry?

Try:
PROJECT=
REGION=
REPO=
IMAGE=
gcloud artifacts docker images list \
${REGION}-docker.pkg.dev/${PROJECT}/${REPO} \
--filter="package=${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/${IMAGE}" \
--sort-by="~UPDATE_TIME" \
--limit=1 \
--format='value(format("{0}@{1}",package,version))'
Because it:
Filters the list for a specific image
Sorts the results descending (~) by UPDATE_TIME¹
Takes only 1 value, i.e. the most recent
Outputs the result as {package}@{version}
¹ Curiously, --sort-by uses the output (!) field name, not the underlying field name (surfaced by e.g. --format=json or --format=yaml).
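For example (a sketch, assuming the same PROJECT/REGION/REPO/IMAGE variables; the LATEST_IMAGE variable name is illustrative), you can capture the digest-qualified reference and pin it wherever it's needed:
# Capture the most recent digest-qualified image reference
LATEST_IMAGE=$(gcloud artifacts docker images list \
${REGION}-docker.pkg.dev/${PROJECT}/${REPO} \
--filter="package=${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/${IMAGE}" \
--sort-by="~UPDATE_TIME" \
--limit=1 \
--format='value(format("{0}@{1}",package,version))')
# e.g. pull the pinned image, or pass it to a deploy step
docker pull "${LATEST_IMAGE}"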

Many thanks for the previous answer; I use it to remove the "latest" tag from my last pushed artifact and re-add it when I push another. Leaving it here in case anyone is interested.
Docs: https://cloud.google.com/artifact-registry/docs/docker/manage-images#tag
Remove tag:
gcloud artifacts docker tags delete \
$(gcloud artifacts docker images list \
${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/${IMAGE} \
--filter="package=${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/${IMAGE}" \
--sort-by="~UPDATE_TIME" --limit=1 \
--format='value(format("{0}",package))'):latest
Add tag:
gcloud artifacts docker tags add \
$(gcloud artifacts docker images list \
${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/${IMAGE} \
--filter="package=${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/${IMAGE}" \
--sort-by="~UPDATE_TIME" --limit=1 \
--format='value(format("{0}@{1}",package,version))') \
$(gcloud artifacts docker images list \
${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/${IMAGE} \
--filter="package=${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/${IMAGE}" \
--sort-by="~UPDATE_TIME" --limit=1 \
--format='value(format("{0}",package))'):latest

How to escape slash in gcloud format / filter command?

I would like to filter Cloud Run revisions by their container image.
When I run this gcloud run revisions command,
gcloud beta run revisions list --service sample-service --region=asia-northeast1 --limit=5 --sort-by="~DEPLOYED" --format="json"
it outputs the following JSON:
[
  {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Revision",
    "metadata": {
      "annotations": {
        "autoscaling.knative.dev/maxScale": "1",
        "client.knative.dev/user-image": "asia.gcr.io/sample-gcp-project/sample-app:e88597bcfb346aa1",
        "run.googleapis.com/client-name": "gcloud",
        "run.googleapis.com/client-version": "383.0.1",
        ...
I tried to filter the revisions with the --filter option, but it raises an error:
gcloud beta run revisions list --service it-sys-watch --region=asia-northeast1 --limit=1 --sort-by="~DEPLOYED" --filter='metadata.annotations.client.knative.dev/user-image=asia.gcr.io/sample-gcp-project/sample-app:e88597bcfb346aa1'
ERROR: (gcloud.beta.run.revisions.list) Non-empty key name expected [metadata.annotations.client.knative.dev *HERE* /user-image=asia.gcr.io/sample-gcp-project/sample-app:e88597bcfb346aa1].
Neither escaping with a backslash nor doubling the slash works:
gcloud beta run revisions list --service it-sys-watch --region=asia-northeast1 --limit=1 --sort-by="~DEPLOYED" --filter='metadata.annotations.client.knative.dev\/user-image=asia.gcr.io/sample-gcp-project/sample-app:e88597bcfb346aa1'
WARNING: The following filter keys were not present in any resource : metadata.annotations.client.knative.dev\/user-image
Listed 0 items.
gcloud beta run revisions list --service it-sys-watch --region=asia-northeast1 --limit=1 --sort-by="~DEPLOYED" --filter='metadata.annotations.client.knative.dev//user-image=asia.gcr.io/sample-gcp-project/sample-app:e88597bcfb346aa1'
ERROR: (gcloud.beta.run.revisions.list) Non-empty key name expected [metadata.annotations.client.knative.dev *HERE* //user-image=asia.gcr.io/sample-gcp-project/sample-app:e88597bcfb346aa1].
The gcloud --format option fails on backslash-escaped keys as well.
Is there any way to filter on a key that contains slashes?
Try:
gcloud beta run revisions list \
--service=it-sys-watch \
--region=asia-northeast1 \
--sort-by="~DEPLOYED" \
--filter='metadata.annotations["client.knative.dev/user-image"]="asia.gcr.io/sample-gcp-project/sample-app:e88597bcfb346aa1"'
NOTE You need to drop the --limit=1 too, though this conflicts with the documentation, which suggests that the limit is applied after the filter:
gcloud ... --filter=... --limit=1 | jq 'length' yields 0
gcloud ... --filter=... | jq 'length' yields 1
Let's see what Google Engineering says: 231192444
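For completeness, a sketch (same service and region as the question; head does the truncation client-side to sidestep the --limit oddity) that returns just the matching revision's name:
# Quote the annotation key in brackets so its slashes aren't parsed as
# key separators, then truncate client-side instead of using --limit
gcloud beta run revisions list \
--service=it-sys-watch \
--region=asia-northeast1 \
--sort-by="~DEPLOYED" \
--filter='metadata.annotations["client.knative.dev/user-image"]="asia.gcr.io/sample-gcp-project/sample-app:e88597bcfb346aa1"' \
--format='value(metadata.name)' | head -n 1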

Unable to create a gcloud alert policy on the command line with multiple conditions

I am trying to create a single alert policy for Cloud SQL instance_state through gcloud with multiple conditions.
If the instance is in the "RUNNABLE" OR "FAILED" state for more than 5 minutes, then an alert should be triggered. I was able to create that in the console.
Now I try the same on the command line with this gcloud command:
gcloud alpha monitoring policies create \
--display-name='Test Database State Alert ('$PROJECTID')' \
--condition-display-name='Instance is not running for 5 minutes'\
--notification-channels="x23234dfdfffffff" \
--aggregation='{"alignmentPeriod": "60s","perSeriesAligner": "ALIGN_COUNT_TRUE"}' \
--condition-filter='metric.type="cloudsql.googleapis.com/database/instance_state" AND resource.type="cloudsql_database" AND (metric.labels.state = "RUNNABLE")'
OR 'metric.type="cloudsql.googleapis.com/database/instance_state" AND resource.type="cloudsql_database" AND (metric.labels.state = "FAILED")' \
--duration='300s' \
--if='> 0.0' \
--trigger-count=1 \
--combiner='OR' \
--documentation='The rule "${condition.display_name}" has generated this alert for the "${metric.display_name}".' \
--project="$PROJECTID" \
--enabled
I am getting the error below in the OR part of the condition:
ERROR: (gcloud.alpha.monitoring.policies.create) unrecognized arguments:
OR
metric.type="cloudsql.googleapis.com/database/instance_state" AND resource.type="cloudsql_database" AND (metric.labels.state = "FAILED")
Even if I put parentheses around the conditions it still fails, and the || operator fails as well.
Can anyone please tell me the correct gcloud command for this? I also want the structure of the alert policy to be similar to the one created in the Cloud Console.
Thanks
I was able to use gcloud alpha monitoring policies conditions create to append additional conditions.
gcloud alpha monitoring policies create \
--notification-channels=projects/qwiklabs-gcp-04-d822dd6cd419/notificationChannels/2510735656842641871 \
--aggregation='{"alignmentPeriod": "60s","perSeriesAligner": "ALIGN_MEAN"}' \
--condition-display-name='CPU Utilization >0.95 for 1m'\
--condition-filter='metric.type="compute.googleapis.com/instance/cpu/utilization" resource.type="gce_instance"' \
--duration='1m' \
--if='> 0.95' \
--display-name=' alert on spikes or consistantly high cpu' \
--combiner='OR'
gcloud alpha monitoring policies list --format='value(name,displayName)'
gcloud alpha monitoring policies conditions create \
projects/qwiklabs-gcp-04-d822dd6cd419/alertPolicies/1712202834227136574 \
--aggregation='{"alignmentPeriod": "60s","perSeriesAligner": "ALIGN_MEAN"}' \
--condition-display-name='CPU Utilization >0.80 for 10m'\
--condition-filter='metric.type="compute.googleapis.com/instance/cpu/utilization" resource.type="gce_instance"' \
--duration='10m' \
--if='> 0.80'
Duplicate --condition-filter clauses did not work for me. YMMV.
From the docs gcloud alpha monitoring policies create, it appears that you can specify repeated (!) occurrences of:
[--aggregation=AGGREGATION --condition-display-name=CONDITION_DISPLAY_NAME --condition-filter=CONDITION_FILTER --duration=DURATION --if=IF_VALUE --trigger-count=TRIGGER_COUNT | --trigger-percent=TRIGGER_PERCENT]
So I think you need to duplicate your --condition-filter with the --combiner="OR", i.e.
gcloud alpha monitoring policies create \
--display-name='Test Database State Alert ('$PROJECTID')' \
--notification-channels="x23234dfdfffffff" \
--aggregation='{"alignmentPeriod": "60s","perSeriesAligner": "ALIGN_COUNT_TRUE"}' \
--condition-display-name='RUNNABLE' \
--condition-filter='metric.type="cloudsql.googleapis.com/database/instance_state" AND resource.type="cloudsql_database" AND (metric.labels.state = "RUNNABLE")' \
--duration='300s' \
--if='> 0.0' \
--trigger-count=1 \
--aggregation='{"alignmentPeriod": "60s","perSeriesAligner": "ALIGN_COUNT_TRUE"}' \
--condition-display-name='FAILED'\
--condition-filter='metric.type="cloudsql.googleapis.com/database/instance_state" AND resource.type="cloudsql_database" AND (metric.labels.state = "FAILED")' \
--duration='300s' \
--if='> 0.0' \
--trigger-count=1 \
--combiner='OR' \
--documentation='The rule "${condition.display_name}" has generated this alert for the "${metric.display_name}".' \
--project="$PROJECTID" \
--enabled
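If the duplicated-flags form works for you, a quick check (a sketch; the --filter string simply matches the display name used above) confirms both conditions landed on one policy:
# Show the policy's combiner and its conditions
gcloud alpha monitoring policies list \
--project="$PROJECTID" \
--filter='displayName:"Test Database State Alert"' \
--format='yaml(displayName, combiner, conditions)'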

Use `capture_tpu_profile` in AI Platform

We are trying to capture TPU profiling data while running our training task on AI Platform, following this tutorial. All the needed information, like the TPU name, comes from our model output.
config.yaml:
trainingInput:
  scaleTier: BASIC_TPU
  runtimeVersion: '1.15' # also tried '2.1'
Task submission command:
export DATE=$(date '+%Y%m%d_%H%M%S') && \
gcloud ai-platform jobs submit training "imaterialist_image_classification_model_${DATE}" \
--region=us-central1 \
--staging-bucket="gs://${BUCKET}" \
--module-name='efficientnet.main' \
--config=config.yaml \
--package-path="${PWD}/efficientnet" \
-- \
--data_dir="gs://${BUCKET}/tfrecords/" \
--train_batch_size=8 \
--train_steps=5 \
--model_dir="gs://${BUCKET}/algorithms_training/imaterialist_image_classification_model/${DATE}" \
--model_name='efficientnet-b4' \
--skip_host_call=true \
--gcp_project=${GCP_PROJECT_ID} \
--mode=train
When we tried to run capture_tpu_profile with the name our model got from the master:
capture_tpu_profile --gcp_project="${GCP_PROJECT_ID}" --logdir="gs://${BUCKET}/algorithms_training/imaterialist_image_classification_model/20200318_005446" --tpu_zone='us-central1-b' --tpu='<tpu_IP_address>'
we got this error:
File "/home/kovtuh/.local/lib/python3.7/site-packages/tensorflow_core/python/distribute/cluster_resolver/tpu_cluster_resolver.py", line 480, in _fetch_cloud_tpu_metadata
"constructor. Exception: %s" % (self._tpu, e))
ValueError: Could not lookup TPU metadata from name 'b'<tpu_IP_address>''. Please doublecheck the tpu argument in the TPUClusterResolver constructor. Exception: <HttpError 404 when requesting https://tpu.googleapis.com/v1/projects/<GCP_PROJECT_ID>/locations/us-central1-b/nodes/<tpu_IP_address>?alt=json returned "Resource 'projects/<GCP_PROJECT_ID>/locations/us-central1-b/nodes/<tpu_IP_address>' was not found". Details: "[{'#type': 'type.googleapis.com/google.rpc.ResourceInfo', 'resourceName': 'projects/<GCP_PROJECT_ID>/locations/us-central1-b/nodes/<tpu_IP_address>'}]">
It seems the TPU device isn't attached to our project when provisioned by AI Platform, but which project is it attached to, and can we get access to such TPUs to capture their profiles?
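One way to test that premise (a sketch, assuming the Cloud TPU API is enabled and the same ${GCP_PROJECT_ID} placeholder) is to list the TPU nodes your own project can see in that zone; if the node AI Platform provisioned doesn't appear, it lives in a Google-managed service project, which would be consistent with the 404 above:
# List TPU nodes visible to your project in the zone
gcloud compute tpus list \
--zone=us-central1-b \
--project="${GCP_PROJECT_ID}"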

gcloud docker registry filter by timestamp

I'm trying to find images older than some date using the gcloud SDK. I tried:
gcloud container images list-tags gcr.io/my-project/my-image --filter='timestamp < 2017-07-01'
but this gives me all images, so the filter doesn't work.
Well, that one was easier than I initially thought. This is the solution:
gcloud container images list-tags gcr.io/my-project/my-image --filter='timestamp.datetime < 2017-07-01'
--format=json showed me the right field name.
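What that surfaces (abridged, illustrative output; only the nesting matters) is that the filterable key is the nested timestamp.datetime, not timestamp itself:
gcloud container images list-tags gcr.io/my-project/my-image --format=json
# [
#   {
#     "digest": "sha256:...",
#     "tags": ["latest"],
#     "timestamp": {
#       "datetime": "2017-06-30 12:34:56+02:00",
#       ...
#     }
#   }
# ]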
Another way of doing it, which also lets you select which field values to obtain:
gcloud container images list-tags \
--quiet --project "${PROJECT}" "gcr.io/${PROJECT}/${IMAGE_NAME}" \
--sort-by="~timestamp" --format='get(digest)' \
--filter="timestamp.datetime < 2021-10-29"
And to avoid the confusing gcloud warning WARNING: The following filter keys were not present in any resource : timestamp.datetime, you can use this condition:
if [[ $(gcloud container images list-tags --project "${PROJECT}" "gcr.io/${PROJECT}/${IMAGE_NAME}" --format='get(digest)' | wc -l) -gt 0 ]]; then
gcloud container images list-tags \
--quiet --project "${PROJECT}" "gcr.io/${PROJECT}/${IMAGE_NAME}" \
--sort-by="~timestamp" --format='get(digest)' \
--filter="timestamp.datetime < 2021-10-29"
fi
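If the end goal is cleanup (an assumption; the question only asks how to find the images), the digests can be fed straight into gcloud container images delete:
# Delete every image older than the cutoff; --force-delete-tags removes
# any tags pointing at a digest first, --quiet skips the prompt
for digest in $(gcloud container images list-tags \
--quiet --project "${PROJECT}" "gcr.io/${PROJECT}/${IMAGE_NAME}" \
--format='get(digest)' \
--filter="timestamp.datetime < 2021-10-29"); do
gcloud container images delete --quiet --force-delete-tags \
"gcr.io/${PROJECT}/${IMAGE_NAME}@${digest}"
done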

Find Google Cloud Platform Operations Performed by a User

Is there a way to track what Google Cloud Platform operations were performed by a user? We want to audit our costs and track usage accordingly.
Edit: there's a Cloud SDK (gcloud) command:
compute operations list
that lists actions taken on Compute Engine instances. Is there a way to see what user performed these actions?
While you can't see a list of the gcloud commands that were executed, you can see a list of API actions. The gcloud beta logging surface helps with listing/reading logs, but it's a bit harder to use than the console. Try checking the logs in the Cloud Console.
If you wish to only track Google Cloud Platform (GCP) Compute Engine (GCE) operations with the list command for the operations subgroup, you are able to use the --filter flag to see operations performed by a given user $GCE_USER_NAME:
gcloud compute operations list \
--filter="user=$GCE_USER_NAME" \
--limit=1 \
--sort-by="~endTime"
#=>
NAME                         TYPE   TARGET                      HTTP_STATUS  STATUS  TIMESTAMP
$GCP_COMPUTE_OPERATION_NAME  start  $GCP_COMPUTE_INSTANCE_NAME  200          DONE    1970-01-01T00:00:00.001-00:00
Note: feeding the string "~endTime" into the --sort-by flag puts the most recent GCE operation first.
It might help to retrieve the entire log object in JSON:
gcloud compute operations list \
--filter="user=$GCE_USER_NAME" \
--format=json \
--limit=1 \
--sort-by="~endTime"
#=>
[
  {
    "endTime": "1970-01-01T00:00:00.001-00:00",
    . . .
    "user": "$GCE_USER_NAME"
  }
]
or YAML:
gcloud compute operations list \
--filter="user=$GCE_USER_NAME" \
--format=yaml \
--limit=1 \
--sort-by="~endTime"
#=>
---
endTime: '1970-01-01T00:00:00.001-00:00'
. . .
user: $GCE_USER_NAME
You are also able to use the Cloud SDK (gcloud) to explore all audit logs, not just audit logs for GCE; it is incredibly clunky, as the other existing answer points out. However, for anyone who wants to use gcloud instead of the console:
gcloud logging read \
'logName : "projects/$GCP_PROJECT_NAME/logs/cloudaudit.googleapis.com"
protoPayload.authenticationInfo.principalEmail="$GCE_USER_NAME"
severity>=NOTICE' \
--freshness="1d" \
--limit=1 \
--order="desc" \
--project=$GCP_PROJECT_NAME
#=>
---
insertId: . . .
. . .
protoPayload:
  '#type': type.googleapis.com/google.cloud.audit.AuditLog
  authenticationInfo:
    principalEmail: $GCE_USER_NAME
  . . .
. . .
The read command defaults to YAML format, but you can also get your audit logs in JSON:
gcloud logging read \
'logName : "projects/$GCP_PROJECT_NAME/logs/cloudaudit.googleapis.com"
protoPayload.authenticationInfo.principalEmail="$GCE_USER_NAME"
severity>=NOTICE' \
--format=json \
--freshness="1d" \
--limit=1 \
--order="desc" \
--project=$GCP_PROJECT_NAME
#=>
[
  {
    . . .
    "protoPayload": {
      "#type": "type.googleapis.com/google.cloud.audit.AuditLog",
      "authenticationInfo": {
        "principalEmail": "$GCE_USER_NAME"
      },
      . . .
    },
    . . .
  }
]
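Building on the same placeholders, a sketch that speaks to the original cost-audit angle directly: list the distinct users behind the last day's Compute Engine audit entries (protoPayload.serviceName is a standard AuditLog field; sort -u is plain client-side dedup):
# Who touched Compute Engine in the last day? (illustrative; substitute
# $GCP_PROJECT_NAME before running, as elsewhere in this answer)
gcloud logging read \
'logName : "projects/$GCP_PROJECT_NAME/logs/cloudaudit.googleapis.com"
protoPayload.serviceName="compute.googleapis.com"
severity>=NOTICE' \
--freshness="1d" \
--format='value(protoPayload.authenticationInfo.principalEmail)' \
--project=$GCP_PROJECT_NAME | sort -u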