Using gcloud, I can list and remove the images I want with these commands:
gcloud artifacts docker images list LOCATION/PROJECT-ID/REPOSITORY-ID/IMAGE \
--include-tags --filter="tags:IPLA*" --filter="create_time>2022-04-20T00:00:00"
and then
gcloud artifacts docker images delete LOCATION/PROJECT-ID/REPOSITORY-ID/IMAGE:tag
I am trying to automate this so I can filter by tag name and date and run it every day or week.
I've tried running the command inside a Cloud Function, but I don't think that is allowed:
const { spawn } = require("child_process");

const listening = spawn('gcloud', ['artifacts', 'docker', 'images', 'list',
  'LOCATION/PROJECT-ID/REPOSITORY-ID/IMAGE',
  '--include-tags',
  '--filter="tags:IPLA*"',
  '--filter="create_time>2022-04-20T00:00:00"'
]);

listening.stdout.on("data", data => {
  console.log(`stdout: ${data}`);
});

listening.stderr.on("data", data => {
  console.log(`stderr: ${data}`);
});

listening.on('error', (error) => {
  console.log(`error: ${error.message}`);
});
I get this error when running the cloud function:
error: spawn gcloud ENOENT
I'm open to any other solution, like a Cloud Build trigger or Terraform, as long as it can live on Google Cloud.
You use Cloud Functions, a serverless product where you deploy your code and it runs somewhere, on something that you don't manage.
Here, your code assumes that gcloud is installed in the runtime. You can't make that assumption (it is wrong!).
However, you can use another serverless product where you do manage the runtime environment: Cloud Run. The principle is to build your own container (and therefore install whatever you want in it) and then deploy it. That time you can use the gcloud command, because you know it exists in the image.
However, it's not the best option. You have two better choices.
First, use something already built for you by a Google Cloud developer advocate (Seth Vargo). It's named GCR Cleaner and removes images older than a given age.
Or you can call the Artifact Registry REST API directly to perform exactly the same operation as gcloud, but without gcloud. If you want to cheat and go faster, you can run the gcloud command with the --log-http flag to display all the API calls performed by the CLI. Copy the URLs and parameters, and enjoy!
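For illustration, here is a minimal sketch of the list call against that REST API (resource names are placeholders; the same approach works for the delete calls gcloud makes, which --log-http will reveal; inside a Cloud Function you would fetch the token from the runtime's default credentials rather than from gcloud):
#!/usr/bin/env bash
# Rough sketch only: list the Docker images of one repository straight from the
# Artifact Registry REST API, without shelling out to gcloud.
# PROJECT_ID, LOCATION and REPOSITORY_ID are placeholders.
ACCESS_TOKEN=$(gcloud auth print-access-token)   # for local testing only

curl -s \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  "https://artifactregistry.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/repositories/REPOSITORY_ID/dockerImages"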
Initially I started to look into the solution suggested by Guillaume, but it seemed overkill to deploy a whole image just to clean up the Artifact Registry. I ended up finding a lighter approach.
I created a shell script that cleans the images with the filters I wanted:
#!/usr/bin/env bash

_cleanup() {
  image_path="$location-docker.pkg.dev/$project_id/$repository_id/$image_name"
  echo "Starting to filter: $image_path"
  tags=$(gcloud artifacts docker images list $image_path \
    --include-tags \
    --filter="tags:IPLA* AND UPDATE_TIME.date('%Y-%m-%d', Z)<=$(date --date="-$older_than_days days" +'%Y-%m-%d')" \
    --format='value(TAGS)')
  if [ -z "$tags" ]; then
    echo "No images to clean"
  else
    echo "Images found: $tags"
    for tag in $tags; do
      echo "Deleting image: $image_path:$tag"
      gcloud artifacts docker images delete "$image_path:$tag" --quiet
    done
  fi
}

location=$1
project_id=$2
repository_id=$3
image_name=$4        # In this case I just want to clean old branches of the same image
older_than_days=$5   # e.g. 7 - number of days to keep images in the repository

_cleanup

echo
echo "DONE"
Then I created a scheduled trigger on Cloud Build for the following cloudbuild.yaml file:
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    id: 'Clean up older versions'
    entrypoint: 'bash'
    args: [ 'cleanup-old-images.sh', '$_LOCATION', '$PROJECT_ID', '$_REPOSITORY_ID', '$_IMAGE_NAME', '$_OLDER_THAN_DAYS' ]
timeout: 1200s
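Before attaching the schedule, the same build can be exercised by hand; a sketch, run from the directory containing cloudbuild.yaml and cleanup-old-images.sh (the substitution values are placeholders):
# Hypothetical manual run of the cleanup build; substitution values are placeholders.
gcloud builds submit . \
  --config=cloudbuild.yaml \
  --substitutions=_LOCATION=us-central1,_REPOSITORY_ID=my-repo,_IMAGE_NAME=my-image,_OLDER_THAN_DAYS=7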
A more generic variant of the script iterates over every repository in the project and removes images older than one year:
#!/usr/bin/env bash
_cleanup() {
  image_path="$2-docker.pkg.dev/$project_id/$1"
  echo "Starting to filter: $image_path"
  images=$(gcloud artifacts docker images list $image_path \
    --filter="UPDATE_TIME.date('%Y-%m-%d', Z)<=$(date --date="-1 years" +'%Y-%m-%d')" \
    --format='value(IMAGE)')
  if [ -z "$images" ]; then
    echo "No images to clean"
  else
    echo "Images found: $images"
    for each in $images; do
      echo "Deleting image: $each"
      gcloud artifacts docker images delete "$each" --quiet
    done
  fi
}
project_id=$1

gcloud artifacts repositories list --format="value(REPOSITORY,LOCATION)" --project=$project_id | tee -a repo.txt

while read p; do
  stringarray=($p)
  _cleanup ${stringarray[0]} ${stringarray[1]}
done < repo.txt

echo
echo "DONE"

echo "Deleting repo.txt file"
rm -rf repo.txt
I have been playing around with AWS Batch, and I am having some trouble understanding why everything works when I build a Docker image from my local Windows machine and push it to ECR, while it doesn't work when I do the same from an Ubuntu EC2 instance.
What I show below is adapted from this tutorial.
The Dockerfile is very simple:
FROM python:3.6.10-alpine
RUN apk add --no-cache --upgrade bash
COPY ./ /usr/local/aws_batch_tutorial
RUN pip3 install -r /usr/local/aws_batch_tutorial/requirements.txt
WORKDIR /usr/local/aws_batch_tutorial
Where the local folder contains the following bash script (run_job.sh):
#!/bin/bash

error_exit () {
  echo "${BASENAME} - ${1}" >&2
  exit 1
}

################################################################################
######    Convert environment variables to command line arguments      ########
pat="--([^ ]+).+"
arg_list=""
while IFS= read -r line; do
  # Check if line contains a command line argument
  if [[ $line =~ $pat ]]; then
    E=${BASH_REMATCH[1]}
    # Check that a matching environment variable is declared
    if [[ ! ${!E} == "" ]]; then
      # Make sure the argument isn't already included in the argument list
      if [[ ! ${arg_list} =~ "--${E}=" ]]; then
        # Add to argument list
        arg_list="${arg_list} --${E}=${!E}"
      fi
    fi
  fi
done < <(python3 script.py --help)
################################################################################

python3 -u script.py ${arg_list} | tee "${save_name}.txt"
aws s3 cp "./${save_name}.p" "s3://bucket/${save_name}.p" || error_exit "Failed to upload results to s3 bucket."
aws s3 cp "./${save_name}.txt" "s3://bucket/logs/${save_name}.txt" || error_exit "Failed to upload logs to s3 bucket."
It also contains a requirements.txt file with three packages (awscli, boto3, botocore),
and a dummy Python script (script.py) that simply lists the files in an S3 bucket and saves the list to a file that is then uploaded to S3.
Both in my local Windows environment and in the EC2 instance I have set up my AWS credentials with aws configure, and in both cases I can successfully build the image, tag it, and push it to ECR.
The problem arises when I submit the job on AWS Batch, which should run the ECR container using the command ["./run_job.sh"]:
if AWS Batch uses the ECR image pushed from Windows, everything works fine
if it uses the image pushed from the EC2 Linux instance, the job fails, and the only info I can get is this:
Status reason: Task failed to start
I was wondering if anyone has any idea of what might be causing the error.
I think I fixed the problem.
The run_job.sh script in the Docker image has to have execute permission to be run by AWS Batch (but I think this is true in general).
For some reason, when the image is built from Windows the script has this permission, but it doesn't when the image is built from Linux (an AWS EC2 Ubuntu instance).
I fixed the problem by adding the following line to the Dockerfile:
RUN chmod u+x run_job.sh
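A quick way to confirm the fix before pushing to ECR again is to build locally and check the permission bits inside the image (the tag is just an example):
# Build the image locally and list run_job.sh from inside it; the x bit should now be set.
docker build -t aws-batch-tutorial .                             # hypothetical tag
docker run --rm --entrypoint ls aws-batch-tutorial -l run_job.sh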
I'm trying to create a Docker container that will execute a BigQuery query. I started with the Google-provided image that already has gcloud, and I added my bash script that contains my query. I'm passing my service account key as an environment file.
Dockerfile
FROM gcr.io/google.com/cloudsdktool/cloud-sdk:latest
COPY main.sh main.sh
main.sh
gcloud auth activate-service-account X@Y.iam.gserviceaccount.com --key-file=/etc/secrets/service_account_key.json
bq query --use_legacy_sql=false
The gcloud command successfully authenticates but can't save the configuration to /.config/gcloud, saying it is read-only. I've tried modifying that folder's permissions during the build but am struggling to get it right.
Is this the right approach, or is there a better way? If this is the right approach, how can I ensure gcloud can write to the necessary folder?
See the example at the bottom of the Usage section.
You ought to be able to combine this into a single docker run command:
KEY="service_account_key.json"
echo "
[auth]
credential_file_override = /certs/${KEY}
" > ${PWD}/config
docker run \
--detach \
--env=CLOUDSDK_CONFIG=/config \
--volume=${PWD}/config:/config \
--volume=/etc/secrets/${KEY}:/certs/${KEY} \
gcr.io/google.com/cloudsdktool/cloud-sdk:latest \
bq query \
--use_legacy_sql=false
Where:
--env sets the container's value of CLOUDSDK_CONFIG, which depends on the first --volume flag mapping the host's config file that we created in ${PWD} to the container's /config.
The second --volume flag maps the host's /etc/secrets/${KEY} (per your question) to the container's /certs/${KEY}. Change as you wish.
Suitably configured (🤞), you can run bq.
I've not tried this but that should work :-)
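A quick sanity check, assuming the same mounts as above, is to swap bq for a harmless gcloud command and confirm the overridden configuration is picked up:
# Same environment and mounts as the command above, but only print the active config.
docker run --rm \
  --env=CLOUDSDK_CONFIG=/config \
  --volume=${PWD}/config:/config \
  --volume=/etc/secrets/${KEY}:/certs/${KEY} \
  gcr.io/google.com/cloudsdktool/cloud-sdk:latest \
  gcloud config list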
When using Deployment Manager with Cloud Functions, it seems that when a code change is made to the function, Deployment Manager doesn't detect it and doesn't update the function.
To reproduce:
Create helloworld function.
Deploy function with deployment manager.
Make code change.
Deploy again.
Observe that the deployed function has not been updated by visiting the console and examining the source.
How do I invalidate the function so that it is correctly deployed?
This is related to the provided link, github.com/hashicorp/terraform-provider-google/issues/1938.
It seems a hash of the zip is not computed, so some kind of change to the deployment name or other properties is required.
My solution was to get the currently deployed version, increment it, and pass it as a property to the function.
increment_function_version() {
  FUN=$1
  [[ -z "$FUN" ]] && echo "error: require function name" && exit 1
  if ! gcloud functions describe $FUN --region=europe-west2 2> /dev/null; then
    NEW_VERSION=1
  else
    VERSION=$(gcloud functions describe $FUN --region=europe-west2 | grep versionId | awk '{ print $2 }' | sed "s|\'||g")
    NEW_VERSION=$((VERSION + 1))
  fi
}
To do this with Deployment Manager, I had to transition from YAML to full Python schemas (or Jinja), as properties cannot be passed when using the --config flag.
gcloud deployment-manager --project $PROJECT_ID deployments update $NAME $TEMPLATE_FLAG --template mytemplate.py --properties version:$NEW_VERSION
It's important that you provide a schema alongside the Python template for your imports, otherwise the deploy will fail.
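Putting it together, the deploy step might look roughly like this (function and deployment names are placeholders; increment_function_version is the helper defined above):
# Hypothetical wrapper: bump the version property, then update the deployment.
FUN="my-function"        # placeholder function name
NAME="my-deployment"     # placeholder deployment name

increment_function_version "$FUN"
gcloud deployment-manager deployments update "$NAME" \
  --project "$PROJECT_ID" \
  --template mytemplate.py \
  --properties "version:$NEW_VERSION"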
I am using AWS CodeArtifact within my project as a private NPM registry (and proxy, of course), and I have some issues getting the perfect workflow. Right now I have a .sh script which generates the auth token for AWS and creates a project-local .npmrc file. It pretty much looks like this:
#!/bin/sh
export CODEARTIFACT_AUTH_TOKEN=`aws codeartifact get-authorization-token --domain xxxxx \
--domain-owner XXXXXX --query authorizationToken --output text --profile XXXXX`
export REPOSITORY_ENDPOINT=`aws codeartifact get-repository-endpoint --domain xxxxx \
--repository xxxx --format npm --query repositoryEndpoint --output text --profile xxxx`
cat << EOF > .npmrc
registry=$REPOSITORY_ENDPOINT
${REPOSITORY_ENDPOINT#https:}:always-auth=true
${REPOSITORY_ENDPOINT#https:}:_authToken=\${CODEARTIFACT_AUTH_TOKEN}
EOF
Now I don't want to run this script manually, of course; it should be part of my NPM build process, so I started with things like this in package.json:
"scripts": {
"build": "tsc",
"prepublish": "./scriptabove.sh"
}
When running "npm publish" (for example) the .npmrc is created nicely but i assume since NPM is already running, any changes to npmrc wont get picked up. When i run "npm publish" the second time, it works of course.
My question: Is there any way to hook into the build process to apply the token? I dont want to say to my users "please call the scriptabove.sh first before doing any NPM commands. And i dont like "scriptabove.sh && npm publish" either.
You could create a script like this (the publish-package command can be named whatever you want):
"scripts": {
"build": "tsc",
"prepublish": "./scriptabove.sh",
"publish-package": "npm run prepublish && npm publish"
}
Explanation:
Use & (single ampersand) for parallel execution.
Use && (double ampersand) for sequential execution.
publish-package will run the prepublish command first and then run npm publish. This is a great way to chain npm commands that need to run in sequential order.
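A tiny illustration of the difference:
# && runs the second command only if the first one exits successfully (sequential)
echo "prepublish" && echo "publish"
# & puts the first command in the background and starts the second one immediately (parallel)
echo "prepublish" & echo "publish"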
For more information on this here's a StackOverflow post about it.
Running NPM scripts sequentially
I recently found out that a Google Cloud Build happens while the Docker image is being built (not, as I thought, that it would build my image and then execute my image to do all the building). That was covered in this post:
quick start in google cloud build
So now I have a Dockerfile that is really simple, like so:
FROM gcr.io/google.com/cloudsdktool/cloud-sdk:alpine
RUN mkdir -p ./monobuild
COPY . ./monobuild/
WORKDIR "/monobuild"
RUN ["/bin/bash", "./downloadAndExtract.sh"]
and I have a single downloadAndExtract.sh script that downloads any artifacts (zip files) built by the last monobuild run (only modified servers are built, OR servers that depend on changes in the last CI build, such as downstream libraries). The first line just writes the URLs of the zip files I need to a file...
curl "https://circleci.com/api/v1.1/project/butbucket/Twilio/orderly/latest/artifacts?circle-token=$token" | grep -o 'https://[^"]*zip' > artifacts.txt
while read url; do
echo "Downloading url=$url"
zipFile=${url/*\//}
projectName=${zipFile/.zip/}
echo "Zip filename=$zipFile"
echo "projectName=$projectName"
wget "$url?circle-token=$token"
mv "$zipFile?circle-token=$token" $zipFile
unzip $zipFile
rm $zipFile
cd $projectName
./deployGcloud.sh
cd ..
done <artifacts.txt
echo "DONE"
Of course, the deployGcloud.sh script has these commands in it, so this means we are building Docker images WHILE building the Google Cloud Build Docker image (which still seems funny to me)...
docker build . --tag gcr.io/twix/authservice
docker push gcr.io/twix/authservice
gcloud run deploy staging-admin --region us-west1 --image gcr.io/twix/authservice --platform managed
BOTH docker commands seem to error out with this:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
while the gcloud command seems to be very happy doing a deploy, but it just uses a previous image we deployed at that location.
So, how do I get around this error so my build will work and build N images and deploy them all to Cloud Run?
Oh, I finally figured it out. Google has this weird thing in its cloudbuild.yaml files: use this Docker image to run a curl command, then on the next step use this OTHER Docker image to run some other command, and so on, using five different images. This is all very confusing, so instead I realized I had to figure out how to create my ONE Docker image and just run it as a command. So I modified the Dockerfile above to use an ENTRYPOINT instead, then docker build and docker push my image to Google. Then I have a cloudbuild.yaml with a single step that runs that image.
This way, we can tweak our builds easily within the Docker image that just gets run. This is far simpler than the complex model Google has set up: do your build in the container however you like and install whatever tools you need in the one Docker image.
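A rough sketch of that shape, using the names from the question (the builder image tag is a placeholder):
# The Dockerfile's last line becomes an ENTRYPOINT instead of a RUN, e.g.
#   ENTRYPOINT ["/bin/bash", "./downloadAndExtract.sh"]
# Build and push that one builder image once:
docker build . --tag gcr.io/twix/monobuilder        # placeholder tag
docker push gcr.io/twix/monobuilder
# cloudbuild.yaml then contains a single step whose "name" is this image, so every
# scheduled build just runs the container and the script inside does all the work.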
i.e. beware the Google quick starts, which honestly, IMHO, really overcomplicate things compared to CircleCI and other systems (of course, that is just an opinion, and to each their own).