I am following the instructions both here - https://docs.aws.amazon.com/cli/latest/reference/rekognition/detect-custom-labels.html - and on the AWS Console itself in order to test recognition against a model/dataset I have built using Custom Labels. The console advises using the AWS CLI to make requests against your model; however, when I try the suggested commands, specifically
PS C:\Users\james> aws rekognition start-project-version
And
PS C:\Users\james> aws rekognition detect-custom-labels
I get the error:
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws.exe: error: argument operation: Invalid choice, valid choices are:
compare-faces | create-collection
create-stream-processor | delete-collection
delete-faces | delete-stream-processor
describe-stream-processor | detect-faces
detect-labels | detect-moderation-labels
detect-text | get-celebrity-info
get-celebrity-recognition | get-content-moderation
get-face-detection | get-face-search
get-label-detection | get-person-tracking
index-faces | list-collections
list-faces | list-stream-processors
recognize-celebrities | search-faces
search-faces-by-image | start-celebrity-recognition
start-content-moderation | start-face-detection
start-face-search | start-label-detection
start-person-tracking | start-stream-processor
stop-stream-processor | help
My first thought was that my CLI was out of date. I updated it, and the version is now:
PS C:\Users\james> aws --version
aws-cli/1.14.53 Python/2.7.9 Windows/8 botocore/1.9.6
PS C:\Users\james>
Yet these commands for Rekognition Custom Labels / projects still do not appear. Where am I going wrong here? :/
EDIT: I updated the CLI, which lets me run the command, but now I get this error:
Command:
aws rekognition detect-custom-labels --project-version-arn "arn:aws:rekognition:us-west-2:xxxxxxxxxxxxxxx:project/api-dev-rtest/version/api-dev-rtest.2019-12-07T16.35.53/xxxxxxxxxxxxxx" --image "{"S3Object": {"Bucket": "xxxxxxxxxxxxx","Name": "James/yes.JPG"}}" --endpoint-url https://rekognition.us-west-2.amazonaws.com --region us-west-2
Error:
Unknown options: S3Object: {Bucket: xxxxxxxxxxxxx,Name: James/yes.JPG}}
Try putting the --image parameter into single quotes:
... --image '{"S3Object": {"Bucket": "xxxxxxxxxxxxx","Name": "James/yes.JPG"}}'
aws rekognition detect-custom-labels --project-version-arn "arn:aws:rekognition:us-west-2:xxxxxxxxxxxxxxx:project/api-dev-rtest/version/api-dev-rtest.2019-12-07T16.35.53/xxxxxxxxxxxxxx" --image '{"S3Object": {"Bucket": "xxxxxxxxxxxxx","Name": "James/yes.JPG"}}' --region us-west-2
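If the quoting is still awkward in PowerShell, an alternative sketch (using the AWS CLI's standard file:// parameter loading; image.json is just an example file name) keeps the JSON out of the shell entirely:
# image.json contains: {"S3Object": {"Bucket": "xxxxxxxxxxxxx", "Name": "James/yes.JPG"}}
aws rekognition detect-custom-labels --project-version-arn "arn:aws:rekognition:us-west-2:xxxxxxxxxxxxxxx:project/api-dev-rtest/version/api-dev-rtest.2019-12-07T16.35.53/xxxxxxxxxxxxxx" --image file://image.json --region us-west-2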
You need to update your boto3 version to 1.10.34.
Try the command: sudo pip install --upgrade boto3
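Note that boto3 is the Python SDK rather than the CLI itself; if the goal is simply to get the new Rekognition subcommands into the aws command, the equivalent upgrade (assuming a pip-based install of AWS CLI v1) would be:
# Upgrade the AWS CLI v1 package (pip-based installs only)
pip install --upgrade awscli
# Then confirm the new subcommands are listed
aws rekognition help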
I am managing an Azure Container Registry. I have scheduled an ACR purge task which deletes all image tags older than 7 days and excludes versioned images whose tags start with v, so that certain images can be skipped from cleanup.
For example, if the images have tags like:
123abc
v1.2
v1.3
xit5424
v1.4
34xyurc
v2.1
So it should delete the images whose tags do not start with v. For example, it should delete the images below:
123abc
xit5424
34xyurc
My script is something like this.
PURGE_CMD="acr purge --filter 'Repo1:.' --filter 'ubuntu:.' --ago 7d --untagged --keep 5"
az acr run
--cmd "$PURGE_CMD"
--registry Myregistry
/dev/null
Thanks Ashish
Please check if the below gives an idea for a workaround:
Here I am trying to make use of the delete command.
grep -v: invert the sense of matching, to select non-matching lines.
grep -o: show only the part of a matching line that matches PATTERN.
grep - Reference
1) Try to get the tags which do not match the pattern "v":
$tagsArray = az acr repository show-tags --name myacr --repository myrepo --orderby time_desc `
    --output tsv | grep -v "^v"
Check with the purge command below if possible (not tested):
PURGE_CMD="acr purge --filter 'Repo1:.*' --filter 'ubuntu:.*' --ago 7d --filter '$tagsArray' --untagged --keep 5"
az acr run --cmd "$PURGE_CMD" --registry Myregistry /dev/null
(or)
Check by using the delete command.
Ex:
$repositoryList = (az acr repository list --name $registryName --output json | ConvertFrom-Json)
foreach ($repo in $repositoryList)
{
    # List this repo's tags, newest first, keeping only those that do not start with "v"
    $tagsArray = az acr repository show-tags --name $registryName --repository $repo --orderby time_desc `
        --output tsv | grep -v "^v"
    foreach ($tag in $tagsArray)
    {
        az acr repository delete --name $registryName --image "$($repo):$tag" --yes
    }
}
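For completeness, here is a minimal bash sketch of the same delete loop (untested; myregistry is a placeholder registry name, and the ^v anchor keeps only tags that do not start with v):
registry="myregistry"
# Loop over every repository in the registry.
for repo in $(az acr repository list --name "$registry" --output tsv); do
    # Select tags that do NOT start with "v", then delete them one by one.
    for tag in $(az acr repository show-tags --name "$registry" --repository "$repo" --orderby time_desc --output tsv | grep -v '^v'); do
        az acr repository delete --name "$registry" --image "$repo:$tag" --yes
    done
done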
Or we can get all the tags that should not be deleted with a query, and use an if/else statement on each tag:
foreach ($repo in $repositoryList)
{
    # All tags for this repo, oldest first, keeping back the last $skipLastTags from deletion
    $AllTags = (az acr repository show-tags --name $registryName --repository $repo --orderby time_asc --output json | ConvertFrom-Json) | Select-Object -SkipLast $skipLastTags
    # Tags that must be kept (here: any tag whose name contains 'tagname')
    $doNotDeleteTags = $(az acr repository show-tags --name $registryName --repository $repo --query "[?contains(name, 'tagname')]" --output tsv)
    # or: $doNotDeleteTags = az acr repository show-tags --name $registryName --repository $repo --orderby time_asc --query "[?starts_with(name,'prefix')].name" --output tsv
    foreach ($tag in $AllTags)
    {
        if ($doNotDeleteTags -contains $tag)
        {
            Write-Output "This tag is not deleted: $tag"
        }
        else
        {
            az acr repository delete --name $registryName --image "$($repo):$tag" --yes
        }
    }
}
References:
fetch-the-latest-image-from-acr-that-doesnt-start-with-a-prefix
azure-container-registry-delete
how-to-delete-image-from-azure-container-registry
acr-delete-only-old-images-
I would like to update the CloudFront distribution with the latest Lambda@Edge function using the CLI.
I saw this documentation https://docs.aws.amazon.com/cli/latest/reference/cloudfront/update-distribution.html
but could not figure out how to update only the Lambda ARN.
Can someone help?
Here is a script that does exactly that. It is implemented based on @cloudbud's answer. There is no argument checking. It would be executed like this: ./script QF234ASD342FG my-lambda-at-edge-function us-east-1. In my case, the execution time is less than 10 seconds. See update-distribution for details.
#!/bin/bash
set -xeuo pipefail
export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin
distribution_id="$1"
function_name="$2"
region="$3"
readonly lambda_arn=$(
aws lambda list-versions-by-function \
--function-name "$function_name" \
--region "$region" \
--query "max_by(Versions, &to_number(to_number(Version) || '0'))" \
| jq -r '.FunctionArn'
)
readonly tmp1=$(mktemp)
readonly tmp2=$(mktemp)
aws cloudfront get-distribution-config \
--id "$distribution_id" \
> "$tmp1"
readonly etag=$(jq -r '.ETag' < "$tmp1")
cat "$tmp1" \
| jq '(.DistributionConfig.CacheBehaviors.Items[] | select(.PathPattern=="dist/sxf/*") | .LambdaFunctionAssociations.Items[] | select(.EventType=="origin-request") | .LambdaFunctionARN ) |= "'"$lambda_arn"'"' \
| jq '.DistributionConfig' \
> "$tmp2"
# the distribution config has to be in a file
# and be referred to in a specific way (file://).
aws cloudfront update-distribution \
--id "$distribution_id" \
--distribution-config "file://$tmp2" \
--if-match "$etag"
rm -f "$tmp1" "$tmp2"
could not figure out how to update the lambda arn only.
The link that you provided explains the process:
The update process includes getting the current distribution configuration, updating the XML document that is returned to make your changes, and then submitting an UpdateDistribution request to make the updates.
This means that you can't just update the Lambda ARN directly. You have to:
1) Call get-distribution-config to obtain the full current configuration.
2) Change the Lambda ARN in the configuration data obtained.
3) Upload the entire new configuration using update-distribution.
The process requires extra attention which is also explained in the docs under Warning:
You must strip out the ETag parameter that is returned.
Additional fields are required when you update a distribution.
and more.
The process is indeed complex. Thus, if you can, I would recommend trying this on a test/dummy CloudFront distribution rather than directly on the production version.
Something like this:
#!/bin/bash
set -x
TEMPDIR=$(mktemp -d)
CONFIG=$(aws cloudfront get-distribution-config --id CGSKSKLSLSM)
ETAG=$(echo "${CONFIG}" | jq -r '.ETag')
echo "${CONFIG}" | jq '.DistributionConfig' > ${TEMPDIR}/orig.json
echo "${CONFIG}" | jq '.DistributionConfig | .DefaultCacheBehavior.LambdaFunctionAssociations.Items[0].LambdaFunctionARN= "arn:aws:lambda:us-east-1:xxxxx:function:test-func:3"' > ${TEMPDIR}/updated.json
aws cloudfront update-distribution --id CGSKSKLSLSM --distribution-config file://${TEMPDIR}/updated.json --if-match "${ETAG}"
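To confirm the association was actually updated (a quick check only, reusing the same placeholder distribution ID):
aws cloudfront get-distribution-config --id CGSKSKLSLSM \
    | jq -r '.DistributionConfig.DefaultCacheBehavior.LambdaFunctionAssociations.Items[].LambdaFunctionARN'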
Lets say "foo" is the repository name and I want to call the image which has two tags "boo, boo-0011"
This command displays all the images in the repository:
aws ecr describe-images --repository-name foo --query "sort_by(imageDetails,& imagePushedAt)[*].imageTags[*]"
From this, how do I grep only the one which has the tag "boo"?
You can use --filter tagStatus=xxx but that only allows you to filter on TAGGED or UNTAGGED images, not images with a specific tag.
To find images with a specific tag, say boo, you should be able to use the somewhat inscrutable, but very helpful, jq utility. For example:
aws ecr describe-images \
--region us-east-1 \
--repository-name foo \
--filter tagStatus=TAGGED \
| jq -c '.imageDetails[] | select([.imageTags[] == "boo"] | any)'
Personally, I use grep for this:
aws ecr describe-images --repository-name foo --query "sort_by(imageDetails,& imagePushedAt)[*].imageTags[*]" | grep -w 'boo'
-w is the grep flag for whole-word matching.
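If you would rather stay inside the AWS CLI, a JMESPath-only variant (a sketch, not tested against a live repository) filters on the imageTags list directly, since contains() accepts an array:
aws ecr describe-images --repository-name foo --filter tagStatus=TAGGED \
    --query "imageDetails[?contains(imageTags, 'boo')]"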
I have a Jenkins 2.0 job where I require the user to select the list of servers to execute the job against via a Jenkins "Active Choices Reactive Parameter". These servers which the job will execute against are AWS EC2 instances. Instead of hard-coding the available servers in the "Active Choices Reactive Parameter", I'd like to query the AWS CLI to get a list of servers.
A few notes:
I've assigned the Jenkins 2.0 EC2 an IAM role which has sufficient privileges to query AWS via the CLI.
The AWS CLI is installed on the Jenkins EC2.
The "Active Choices Reactive Parameter" will return a list of checkboxes if I hardcode values in a Groovy script, such as:
return ["10.1.1.1", "10.1.1.2", 10.1.1.3"]
I know my awk commands can be improved, I'm not yet sure how, but my primary goal is to get the list of servers dynamically loaded in Jenkins.
I can run the following command directly on the EC2 instance which is hosting Jenkins:
aws ec2 describe-instances --region us-east-2 \
  --filters "Name=tag:Env,Values=qa" \
  --query "Reservations[*].Instances[*].PrivateIpAddress" \
  | grep -o '\"[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\"' \
  | awk {'printf $0 ", "'} \
  | awk {'print "[" $0'} \
  | awk {'gsub(/^[ \t]+|[ \t]+$/, ""); print'} \
  | awk {'print substr ($0, 1, length($0)-1)'} \
  | awk {'print $0 "]"'}
This will return the following, which is in the format expected by the "Active Choices Reactive Parameter":
["10.1.1.1", "10.1.1.2", 10.1.1.3"]
So, in the "Script" textarea of the "Active Choices Reactive Parameter", I have the following script. The problem is that my server list is never populated. I've tried numerous variations of this script without luck. Can someone please tell me where I've went wrong and what I can do to correct this script so that my list of server IP addresses is dynamically loaded into a Jenkins job?
def standardOut = new StringBuffer(), standardErr = new StringBuffer()
def command = $/
aws ec2 describe-instances --region us-east-2 --filters "Name=tag:Env,Values=qaint" --query "Reservations[*].Instances[*].PrivateIpAddress" |
grep -o '\"[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\"' |
awk {'printf $0 ", "'} |
awk {'print "[" $0'} |
awk {'gsub(/^[ \t]+|[ \t]+$/, ""); print'} |
awk {'print substr ($0, 1, length($0)-1)'} |
awk {'print $0 "]"'}
/$
def proc = command.execute()
proc.consumeProcessOutput(standardOut, standardErr)
proc.waitForOrKill(1000)
return standardOut
I tried to execute your script and standardErr had some errors. It looks like Groovy didn't like the double quotes in the AWS CLI command. Here is a cleaner way to do it without using awk:
def command = 'aws ec2 describe-instances \
--filters Name=tag:Name,Values=Test \
--query Reservations[*].Instances[*].PrivateIpAddress \
--output text'
def proc = command.execute()
proc.waitFor()
def output = proc.in.text
def exitcode= proc.exitValue()
def error = proc.err.text
if (error) {
println "Std Err: ${error}"
println "Process exit code: ${exitcode}"
return exitcode
}
//println output.split()
return output.split()
This script works with the Jenkins Active Choices Parameter and returns the list of IP addresses:
def aws_cmd = 'aws ec2 describe-instances \
--filters Name=instance-state-name,Values=running \
Name=tag:env,Values=dev \
--query Reservations[].Instances[].PrivateIpAddress[] \
--region us-east-2 \
--output text'
def aws_cmd_output = aws_cmd.execute()
// probably required if the execution takes long
//aws_cmd_output.waitFor()
def ip_list = aws_cmd_output.text.tokenize()
return ip_list
Using AWS CLI, and jq if needed, I'm trying to get the tag of the newest image in a particular repo.
Let's call the repo foo, and say the latest image is tagged bar. What query do I use to return bar?
I got as far as
aws ecr list-images --repository-name foo
and then realized that the list-images documentation gives no reference to the date as a queryable field. Sticking the above in a terminal gives me key/value pairs with just the tag and digest, and no date.
Is there still some way to get the "latest" image? Can I assume it'll always be the first, or the last in the returned output?
You can use describe-images instead.
aws ecr describe-images --repository-name foo
This returns imagePushedAt, which is a timestamp property you can use to sort and filter.
I don't have examples in my account to test with, but something like the following should work:
aws ecr describe-images --repository-name foo \
--query 'sort_by(imageDetails,& imagePushedAt)[*]'
If you want another flavor of using the sort method, you can review this post.
To add to Frederic's answer, if you want the latest, you can use [-1]:
aws ecr describe-images --repository-name foo \
--query 'sort_by(imageDetails,& imagePushedAt)[-1].imageTags[0]'
Assuming you are using a singular tag on your images... otherwise you might need to use imageTags[*] and do a little more work to grab the tag you want.
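For instance (a sketch on the same placeholder repository), you could pull back every tag on the newest image and pick the one you want from there:
aws ecr describe-images --repository-name foo \
    --query 'sort_by(imageDetails,& imagePushedAt)[-1].imageTags' --output text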
To get only the latest image without special characters, a minor addition is required to the above answer:
aws ecr describe-images --repository-name foo --query 'sort_by(imageDetails,& imagePushedAt)[-1].imageTags[0]' --output text
List latest 3 images pushed to ECR
aws ecr describe-images --repository-name gvh \
--query 'sort_by(imageDetails,& imagePushedAt)[*].imageTags[0]' --output yaml \
| tail -n 3 | awk -F'- ' '{print $2}'
List first 3 images pushed to ECR
aws ecr describe-images --repository-name gvh \
--query 'sort_by(imageDetails,& imagePushedAt)[*].imageTags[0]' --output yaml \
| head -n 3 | awk -F'- ' '{print $2}'
The number '3' can be generalized in either the head or tail command based on user requirements.
Without having to sort the results, you can filter them by specifying imageTag=latest on --image-ids, like so:
aws ecr describe-images --repository-name foo --image-ids imageTag=latest --output text
This will return only one result with the newest image, which is the one tagged as latest.
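For example, to print only the tag list of that image (foo is the same placeholder repository, and this assumes a latest tag actually exists):
aws ecr describe-images --repository-name foo --image-ids imageTag=latest \
    --query 'imageDetails[0].imageTags' --output text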
Some of the provided solutions will fail because:
There is no image tagged with 'latest'.
There are multiple tags available, e.g. [1.0.0, 1.0.9, 1.0.11]. With a sort_by this will return 1.0.9, which is not the latest.
Because of this, it's better to check the image digest.
You can do so with this simple bash script:
#!/bin/bash -
#===============================================================================
#
# FILE: get-latest-image-per-ecr-repo.sh
#
# USAGE: ./get-latest-image-per-ecr-repo.sh aws-account-id
#
# AUTHOR: Enri Peters (EP)
# CREATED: 04/07/2022 12:59:15
#=======================================================================
set -o nounset # Treat unset variables as an error
for repo in \
$(aws ecr describe-repositories |\
jq -r '.repositories[].repositoryArn' |\
sort -u |\
awk -F ":" '{print $6}' |\
sed 's/repository\///')
do
echo "$1.dkr.ecr.eu-west-1.amazonaws.com/${repo}#$(aws ecr describe-images\
--repository-name ${repo}\
--query 'sort_by(imageDetails,& imagePushedAt)[-1].imageDigest' |\
tr -d '"')"
done > latest-image-per-ecr-repo-${1}.list
The output will be written to a file named latest-image-per-ecr-repo-awsaccountid.list.
An example of this output could be:
123456789123.dkr.ecr.eu-west-1.amazonaws.com/your-ecr-repository-name@sha256:fb839e843b5ea1081f4bdc5e2d493bee8cf8700458ffacc67c9a1e2130a6772a
...
...
With this you can do something like below to pull all the images to your machine.
#!/bin/bash -
for image in $(cat latest-image-per-ecr-repo-353131512553.list)
do
docker pull $image
done
You will see when you run docker images that none of the images are tagged. But you can 'fix' this by running these commands:
docker images --format "docker image tag {{.ID}} {{.Repository}}:latest" > tag-images.sh
chmod +x tag-images.sh
./tag-images.sh
Then they will all be tagged with latest on your machine.
To get the latest image tag, use:
aws ecr describe-images --repository-name foo --query 'imageDetails[*].imageTags[*]' --output text | sort -r | head -n 1