gcloud docker registry filter by timestamp

I'm trying to find images older than some date using the gcloud SDK. I tried
gcloud container images list-tags gcr.io/my-project/my-image --filter='timestamp < 2017-07-01'
but this gives me all images, so the filter doesn't work.

Well, that one was easier than I initially thought. This is the solution:
gcloud container images list-tags gcr.io/my-project/my-image --filter='timestamp.datetime < 2017-07-01'
--format=json showed me the right field name.
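In general, if you're unsure which keys a resource exposes for filtering, dumping a single entry as JSON makes the field names visible; a quick sketch:
gcloud container images list-tags gcr.io/my-project/my-image \
  --limit=1 --format=json
Any key path that appears in the JSON (such as timestamp.datetime) can then be used in a --filter expression.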

Another way of doing it, which also lets you select which field values you want to obtain:
gcloud container images list-tags \
--quiet --project "${PROJECT}" "gcr.io/${PROJECT}/${IMAGE_NAME}" \
--sort-by="~timestamp" --format='get(digest)' \
--filter="timestamp.datetime < 2021-10-29"
And to avoid the confusing gcloud warning "WARNING: The following filter keys were not present in any resource : timestamp.datetime" (emitted when the repository contains no images), you can guard the call with this condition:
if [[ $(gcloud container images list-tags --project "${PROJECT}" "gcr.io/${PROJECT}/${IMAGE_NAME}" --format='get(digest)' | wc -l) -gt 0 ]]; then
  gcloud container images list-tags \
    --quiet --project "${PROJECT}" "gcr.io/${PROJECT}/${IMAGE_NAME}" \
    --sort-by="~timestamp" --format='get(digest)' \
    --filter="timestamp.datetime < 2021-10-29"
fi
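If the goal is cleanup, those digests can then be fed to gcloud container images delete. A minimal sketch, assuming you really do want to delete every image older than the cutoff (--force-delete-tags also removes any tags still pointing at a digest):
for digest in $(gcloud container images list-tags \
  --quiet --project "${PROJECT}" "gcr.io/${PROJECT}/${IMAGE_NAME}" \
  --format='get(digest)' \
  --filter="timestamp.datetime < 2021-10-29"); do
  gcloud container images delete --quiet --force-delete-tags \
    "gcr.io/${PROJECT}/${IMAGE_NAME}@${digest}"
done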

Related

How to get latest version of an image from artifact registry

Is there a command (gcloud) that returns the latest fully qualified name of an image from Artifact Registry?
Try:
PROJECT=
REGION=
REPO=
IMAGE=
gcloud artifacts docker images list \
${REGION}-docker.pkg.dev/${PROJECT}/${REPO} \
--filter="package=${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/${IMAGE}" \
--sort-by="~UPDATE_TIME" \
--limit=1 \
--format="value(format("{0}#{1}",package,version))"
Because it:
Filters the list for a specific image
Sorts the results descending (~) by UPDATE_TIME [1]
Takes only 1 value, i.e. the most recent
Outputs the results as {package}@{version}
[1] Curiously, --sort-by uses the output (!) field name, not the name from the underlying type (surfaced by e.g. --format=json or --format=yaml).
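As a usage sketch, the result can be captured in a variable and passed straight to docker (assuming you've run gcloud auth configure-docker for ${REGION}-docker.pkg.dev):
IMAGE_REF=$(gcloud artifacts docker images list \
  ${REGION}-docker.pkg.dev/${PROJECT}/${REPO} \
  --filter="package=${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/${IMAGE}" \
  --sort-by="~UPDATE_TIME" \
  --limit=1 \
  --format="value(format("{0}@{1}",package,version))")
docker pull "${IMAGE_REF}"
Pulling by digest like this guarantees you get exactly the image that was listed, regardless of how tags move afterwards.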
Many thanks for the previous answer; I use it to remove the tag "latest" from my last pushed artifact, then add it back when I push another. Leaving this here in case anyone is interested.
Doc: https://cloud.google.com/artifact-registry/docs/docker/manage-images#tag
Remove tag:
gcloud artifacts docker tags delete \
  $(gcloud artifacts docker images list \
    ${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/${IMAGE}/ \
    --filter="package=${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/${IMAGE}" \
    --sort-by="~UPDATE_TIME" --limit=1 \
    --format="value(format("{0}",package))"):latest
Add tag:
gcloud artifacts docker tags add \
$(gcloud artifacts docker images list \
${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/${IMAGE}/ \
--filter="package=${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/${IMAGE}" \
--sort-by="~UPDATE_TIME" --limit=1 \
--format="value(format("{0}#{1}",package,version))") \
$(gcloud artifacts docker images list \
${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/${IMAGE}/ \
--filter="package=${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/${IMAGE}" \
--sort-by="~UPDATE_TIME" --limit=1 \
--format="value(format("{0}",package))"):latest

How to escape slash in gcloud format / filter command?

I would like to filter Cloud Run revisions by its container image.
When I run this gcloud run revisions command,
gcloud beta run revisions list --service sample-service --region=asia-northeast1 --limit=5 --sort-by="~DEPLOYED" --format="json"
it outputs the following JSON:
[
  {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Revision",
    "metadata": {
      "annotations": {
        "autoscaling.knative.dev/maxScale": "1",
        "client.knative.dev/user-image": "asia.gcr.io/sample-gcp-project/sample-app:e88597bcfb346aa1",
        "run.googleapis.com/client-name": "gcloud",
        "run.googleapis.com/client-version": "383.0.1",
        ...
I tried to filter revisions with the --filter option, but it raises an error:
gcloud beta run revisions list --service it-sys-watch --region=asia-northeast1 --limit=1 --sort-by="~DEPLOYED" --filter='metadata.annotations.client.knative.dev/user-image=asia.gcr.io/sample-gcp-project/sample-app:e88597bcfb346aa1'
ERROR: (gcloud.beta.run.revisions.list) Non-empty key name expected [metadata.annotations.client.knative.dev *HERE* /user-image=asia.gcr.io/sample-gcp-project/sample-app:e88597bcfb346aa1].
Neither a backslash nor double slashes work:
gcloud beta run revisions list --service it-sys-watch --region=asia-northeast1 --limit=1 --sort-by="~DEPLOYED" --filter='metadata.annotations.client.knative.dev\/user-image=asia.gcr.io/sample-gcp-project/sample-app:e88597bcfb346aa1'
WARNING: The following filter keys were not present in any resource : metadata.annotations.client.knative.dev\/user-image
Listed 0 items.
gcloud beta run revisions list --service it-sys-watch --region=asia-northeast1 --limit=1 --sort-by="~DEPLOYED" --filter='metadata.annotations.client.knative.dev//user-image=asia.gcr.io/sample-gcp-project/sample-app:e88597bcfb346aa1'
ERROR: (gcloud.beta.run.revisions.list) Non-empty key name expected [metadata.annotations.client.knative.dev *HERE* //user-image=asia.gcr.io/sample-gcp-project/sample-app:e88597bcfb346aa1].
The gcloud --format option likewise does not work with backslash-escaped keys.
Any ideas for filtering on keys containing slashes?
Try:
gcloud beta run revisions list \
--service=it-sys-watch \
--region=asia-northeast1 \
--sort-by="~DEPLOYED" \
--filter='metadata.annotations["client.knative.dev/user-image"]="asia.gcr.io/sample-gcp-project/sample-app:e88597bcfb346aa1"'
NOTE: You need to drop --limit=1 as well, though this conflicts with the documentation, which suggests that the limit is applied after the filter:
gcloud ... --filter=... --limit=1 | jq 'length' yields 0
gcloud ... --filter=... | jq 'length' yields 1
Let's see what Google Engineering says: issue 231192444
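In the meantime, a workaround that sidesteps gcloud's filter parser entirely is to emit JSON and select with jq, where slash-containing keys need no escaping; a sketch:
gcloud beta run revisions list \
  --service=it-sys-watch \
  --region=asia-northeast1 \
  --format=json |
jq -r '.[]
  | select(.metadata.annotations["client.knative.dev/user-image"]
      == "asia.gcr.io/sample-gcp-project/sample-app:e88597bcfb346aa1")
  | .metadata.name'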

AWS CLI, List ECR image which I specify with tags

Let's say "foo" is the repository name and I want to get the image which has the two tags "boo" and "boo-0011".
This command displays all the images in the repository:
aws ecr describe-images --repository-name foo --query "sort_by(imageDetails,& imagePushedAt)[*].imageTags[*]"
From this, how do I grep only the one which has the tag "boo"?
You can use --filter tagStatus=xxx but that only allows you to filter on TAGGED or UNTAGGED images, not images with a specific tag.
To find images with a specific tag, say boo, you should be able to use the somewhat inscrutable, but very helpful, jq utility. For example:
aws ecr describe-images \
--region us-east-1 \
--repository-name foo \
--filter tagStatus=TAGGED \
| jq -c '.imageDetails[] | select([.imageTags[] == "boo"] | any)'
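If you only need one field from the match, say the digest, the same jq filter extends naturally:
aws ecr describe-images \
  --region us-east-1 \
  --repository-name foo \
  --filter tagStatus=TAGGED \
| jq -r '.imageDetails[] | select([.imageTags[] == "boo"] | any) | .imageDigest'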
Personally, I use grep for this:
aws ecr describe-images --repository-name foo --query "sort_by(imageDetails,& imagePushedAt)[*].imageTags[*]" | grep -w 'boo'
-w is the grep flag for whole-word matching.

Find Google Cloud Platform Operations Performed by a User

Is there a way to track what Google Cloud Platform operations were performed by a user? We want to audit our costs and track usage accordingly.
Edit: there's a Cloud SDK (gcloud) command:
compute operations list
that lists actions taken on Compute Engine instances. Is there a way to see what user performed these actions?
While you can't see a list of gcloud commands executed, you can see a list of API actions. The gcloud beta logging surface helps with listing/reading logs, though it's a bit harder to use than the console. Try checking the logs in the Cloud Console.
If you wish to only track Google Cloud Platform (GCP) Compute Engine (GCE) operations with the list command for the operations subgroup, you are able to use the --filter flag to see operations performed by a given user $GCE_USER_NAME:
gcloud compute operations list \
--filter="user=$GCE_USER_NAME" \
--limit=1 \
--sort-by="~endTime"
#=>
NAME TYPE TARGET HTTP_STATUS STATUS TIMESTAMP
$GCP_COMPUTE_OPERATION_NAME start $GCP_COMPUTE_INSTANCE_NAME 200 DONE 1970-01-01T00:00:00.001-00:00
Note: feeding the string "~endTime" into the --sort-by flag puts the most recent GCE operation first.
It might help to retrieve the entire log object in JSON:
gcloud compute operations list \
--filter="user=$GCE_USER_NAME" \
--format=json \
--limit=1 \
--sort-by="~endTime"
#=>
[
  {
    "endTime": "1970-01-01T00:00:00.001-00:00",
    . . .
    "user": "$GCP_COMPUTE_USER"
  }
]
or YAML:
gcloud compute operations list \
--filter="user=$GCE_USER_NAME" \
--format=yaml \
--limit=1 \
--sort-by="~endTime"
#=>
---
endTime: '1970-01-01T00:00:00.001-00:00'
. . .
user: $GCP_COMPUTE_USER
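If you don't know which user strings exist to filter on, one quick sketch is to print just the user field of every operation and deduplicate:
gcloud compute operations list --format="value(user)" | sort -u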
You are also able to use the Cloud SDK (gcloud) to explore all audit logs, not just audit logs for GCE; it is incredibly clunky, as the other existing answer points out. However, for anyone who wants to use gcloud instead of the console:
gcloud logging read \
'logName : "projects/$GCP_PROJECT_NAME/logs/cloudaudit.googleapis.com"
protoPayload.authenticationInfo.principalEmail="$GCE_USER_NAME"
severity>=NOTICE' \
--freshness="1d" \
--limit=1 \
--order="desc" \
--project=$GCP_PROJECT_NAME
#=>
---
insertId: . . .
. . .
protoPayload:
  '#type': type.googleapis.com/google.cloud.audit.AuditLog
  authenticationInfo:
    principalEmail: $GCP_COMPUTE_USER
  . . .
. . .
The read command defaults to YAML format, but you can also get your audit logs in JSON:
gcloud logging read \
'logName : "projects/$GCP_PROJECT_NAME/logs/cloudaudit.googleapis.com"
protoPayload.authenticationInfo.principalEmail="$GCE_USER_NAME"
severity>=NOTICE' \
--format=json \
--freshness="1d" \
--limit=1 \
--order="desc" \
--project=$GCP_PROJECT_NAME
#=>
[
  {
    . . .
    "protoPayload": {
      "#type": "type.googleapis.com/google.cloud.audit.AuditLog",
      "authenticationInfo": {
        "principalEmail": "$GCE_USER_NAME"
      },
      . . .
    },
    . . .
  }
]
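To audit usage rather than inspect single entries, the same read can be reduced to just the principal emails and aggregated in the shell, e.g. to count actions per user over the last week (a sketch reusing the log name above):
gcloud logging read \
  'logName : "projects/$GCP_PROJECT_NAME/logs/cloudaudit.googleapis.com"
  severity>=NOTICE' \
  --format="value(protoPayload.authenticationInfo.principalEmail)" \
  --freshness="7d" \
  --project=$GCP_PROJECT_NAME |
sort | uniq -c | sort -rn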

AWS CLI: ECR list-images, get newest

Using AWS CLI, and jq if needed, I'm trying to get the tag of the newest image in a particular repo.
Let's call the repo foo, and say the latest image is tagged bar. What query do I use to return bar?
I got as far as
aws ecr list-images --repository-name foo
and then realized that the list-images documentation gives no reference to the date as a queryable field. Sticking the above in a terminal gives me key-value pairs with just the tag and digest, no date.
Is there still some way to get the "latest" image? Can I assume it'll always be the first, or the last in the returned output?
You can use describe-images instead.
aws ecr describe-images --repository-name foo
returns imagePushedAt, which is a timestamp property you can use to filter.
I don't have examples in my account to test with, but something like the following should work:
aws ecr describe-images --repository-name foo \
--query 'sort_by(imageDetails,& imagePushedAt)[*]'
If you want another flavor of using the sort method, you can review this post.
To add to Frederic's answer, if you want the latest, you can use [-1]:
aws ecr describe-images --repository-name foo \
--query 'sort_by(imageDetails,& imagePushedAt)[-1].imageTags[0]'
Assuming you are using a single tag on your images... otherwise you might need to use imageTags[*] and do a little more work to grab the tag you want.
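For that multi-tag case, one sketch is to take the newest image's full tag list and pick whichever tag matches your versioning scheme (here assuming semver-style tags, with jq doing the matching):
aws ecr describe-images --repository-name foo \
  --query 'sort_by(imageDetails,& imagePushedAt)[-1].imageTags' \
  --output json |
jq -r 'map(select(test("^[0-9]+\\.[0-9]+\\.[0-9]+$"))) | first // empty'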
To get only the latest image tag, without special characters, a minor addition to the above answer is required:
aws ecr describe-images --repository-name foo --query 'sort_by(imageDetails,& imagePushedAt)[-1].imageTags[0]' --output text
List latest 3 images pushed to ECR
aws ecr describe-images --repository-name gvh \
--query 'sort_by(imageDetails,& imagePushedAt)[*].imageTags[0]' --output yaml \
| tail -n 3 | awk -F'- ' '{print $2}'
List first 3 images pushed to ECR
aws ecr describe-images --repository-name gvh \
--query 'sort_by(imageDetails,& imagePushedAt)[*].imageTags[0]' --output yaml \
| head -n 3 | awk -F'- ' '{print $2}'
The number 3 can be changed in either the head or tail command based on your requirements.
Without having to sort the results, you can filter them by specifying imageTag=latest in --image-ids, like so:
aws ecr describe-images --repository-name foo --image-ids imageTag=latest --output text
This will return only one result: the newest image, i.e. the one tagged as latest.
Some of the provided solutions will fail because:
There is no image tagged with 'latest'.
There are multiple tags available, e.g. [1.0.0, 1.0.9, 1.0.11]; with a sort_by this will return 1.0.9, which is not the latest.
Because of this, it's better to check the image digest.
You can do so with this simple bash script:
#!/bin/bash -
#===============================================================================
#
#     FILE: get-latest-image-per-ecr-repo.sh
#
#    USAGE: ./get-latest-image-per-ecr-repo.sh aws-account-id
#
#   AUTHOR: Enri Peters (EP)
#  CREATED: 04/07/2022 12:59:15
#===============================================================================
set -o nounset  # Treat unset variables as an error

for repo in \
  $(aws ecr describe-repositories |
      jq -r '.repositories[].repositoryArn' |
      sort -u |
      awk -F ":" '{print $6}' |
      sed 's/repository\///')
do
  echo "$1.dkr.ecr.eu-west-1.amazonaws.com/${repo}@$(aws ecr describe-images \
    --repository-name ${repo} \
    --query 'sort_by(imageDetails,& imagePushedAt)[-1].imageDigest' |
    tr -d '"')"
done > latest-image-per-ecr-repo-${1}.list
The output will be written to a file named latest-image-per-ecr-repo-awsaccountid.list.
An example of this output could be:
123456789123.dkr.ecr.eu-west-1.amazonaws.com/your-ecr-repository-name@sha256:fb839e843b5ea1081f4bdc5e2d493bee8cf8700458ffacc67c9a1e2130a6772a
...
...
With this, you can do something like the following to pull all the images to your machine:
#!/bin/bash -
for image in $(cat latest-image-per-ecr-repo-353131512553.list)
do
  docker pull $image
done
You will see, when you run docker images, that none of the images are tagged. But you can 'fix' this by running these commands:
docker images --format "docker image tag {{.ID}} {{.Repository}}:latest" > tag-images.sh
chmod +x tag-images.sh
./tag-images.sh
Then they will all be tagged with latest on your machine.
To get the latest image tag, use:
aws ecr describe-images --repository-name foo --query 'imageDetails[*].imageTags[*]' --output text | sort -r | head -n 1
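Note that sort -r compares tags as plain strings, so 1.0.9 would sort after 1.0.11. If your tags are version-like and GNU coreutils is available, sort -V (version sort) orders them numerically; a sketch:
aws ecr describe-images --repository-name foo \
  --query 'imageDetails[*].imageTags[*]' --output text |
tr '\t' '\n' | sort -rV | head -n 1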