Google Cloud Function doesn't update on change when using Deployment Manager - google-cloud-platform

When using Deployment Manager with a Cloud Function, a code change to the function is not detected, and the function is not updated on redeploy.
To reproduce:
Create helloworld function.
Deploy function with deployment manager.
Make code change.
Deploy again.
Observe, by visiting the console and examining the source, that the deployed function has not been updated.
How do I invalidate the function so that it is correctly deployed?

This is related to github.com/hashicorp/terraform-provider-google/issues/1938.
It seems no hash of the zip is taken, so some change to the deployment name or other properties is required to force an update.
My solution was to read the function's current version, increment it, and pass it as a property to the deployment.
increment_function_version() {
  FUN=$1
  [[ -z "$FUN" ]] && echo "error: require function name" && exit 1
  if ! gcloud functions describe "$FUN" --region=europe-west2 > /dev/null 2>&1; then
    NEW_VERSION=1
  else
    VERSION=$(gcloud functions describe "$FUN" --region=europe-west2 | grep versionId | awk '{ print $2 }' | tr -d "'")
    NEW_VERSION=$((VERSION + 1))
  fi
}
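The extraction step can be checked without calling gcloud at all. A minimal sketch, assuming the describe output contains a line shaped like `versionId: '7'` (the sample line is an assumption, not captured gcloud output):

```shell
# Simulated line from `gcloud functions describe` output (assumed shape).
sample="versionId: '7'"

# Same extraction as the function above: take field 2, strip the quotes.
VERSION=$(echo "$sample" | awk '{ print $2 }' | tr -d "'")
NEW_VERSION=$((VERSION + 1))
echo "$NEW_VERSION"
```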
In order to do this with Deployment Manager, I had to move from plain YAML to full Python (or Jinja) templates, as properties cannot be passed when using the --config flag.
gcloud deployment-manager --project $PROJECT_ID deployments update $NAME $TEMPLATE_FLAG --template mytemplate.py --properties version:$NEW_VERSION
It's important that you provide a schema with the Python template for your imports, otherwise the deploy will fail.


How to remove an image from Artifact Registry automatically

Using gcloud, I can list and remove the images I want with these commands:
gcloud artifacts docker images list LOCATION/PROJECT-ID/REPOSITORY-ID/IMAGE \
  --include-tags --filter="tags:IPLA* AND create_time>2022-04-20T00:00:00"
and then
gcloud artifacts docker images delete LOCATION/PROJECT-ID/REPOSITORY-ID/IMAGE:tag
I am trying to automate that so I can filter by tag name and date and run every day or week.
I've tried to run it inside a Cloud Function, but I don't think that is allowed.
const { spawn } = require("child_process");

const listening = spawn('gcloud', ['artifacts', 'docker', 'images', 'list',
  'LOCATION/PROJECT-ID/REPOSITORY-ID/IMAGE',
  '--include-tags',
  '--filter=tags:IPLA*',
  '--filter=create_time>2022-04-20T00:00:00'
]);
listening.stdout.on("data", data => {
  console.log(`stdout: ${data}`);
});
listening.stderr.on("data", data => {
  console.log(`stderr: ${data}`);
});
listening.on('error', (error) => {
  console.log(`error: ${error.message}`);
});
I get this error when running the cloud function:
error: spawn gcloud ENOENT
I accept any other solution, like a Cloud Build trigger or Terraform, as long as it can live on Google Cloud.
You use Cloud Functions, a serverless product where you deploy your code and it runs somewhere, on something that you don't manage.
Here, your code assumes that gcloud is installed in the runtime. You can't make that assumption (it is wrong!).
However, you can use another serverless product where you manage your runtime environment: Cloud Run. The principle is to build your container (and therefore install whatever you want in it) and then deploy it. This time you can use the gcloud command, because you know it exists in the container.
However, it's not the best option. You have 2 better choices.
First of all, use something already built for you by a Google Cloud Developer Advocate (Seth Vargo). It's named GCR Cleaner and removes images older than a given age.
Or you can call the Artifact Registry REST API directly to perform exactly the same operation as gcloud, but without gcloud. If you want to cheat and go faster, you can run the gcloud command with the --log-http flag to display all the API calls performed by the CLI. Copy the URL and parameters, and enjoy!
Initially I looked into the solution suggested by Guillaume, though deploying a whole image just to clean the Artifact Registry seemed overkill. I ended up finding a lighter approach.
I created a shell script to clean the images with the filters I wanted:
#!/usr/bin/env bash

_cleanup() {
  image_path="$location-docker.pkg.dev/$project_id/$repository_id/$image_name"
  echo "Starting to filter: $image_path"
  tags=$(gcloud artifacts docker images list $image_path \
    --include-tags \
    --filter="tags:IPLA* AND UPDATE_TIME.date('%Y-%m-%d', Z)<=$(date --date="-$older_than_days days" +'%Y-%m-%d')" \
    --format='value(TAGS)')
  if [ -z "$tags" ]; then
    echo "No images to clean"
  else
    echo "Images found: $tags"
    for tag in $tags; do
      echo "Deleting image: $image_path:$tag"
      gcloud artifacts docker images delete "$image_path:$tag" --quiet
    done
  fi
}

location=$1
project_id=$2
repository_id=$3
image_name=$4       # in this case I just want to clean the old branches of the same image
older_than_days=$5  # e.g. 7 - number of days to keep in the repository

_cleanup
echo
echo "DONE"
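The cutoff date inside the `--filter` expression is just GNU `date` arithmetic, and it can be sanity-checked locally without gcloud (a sketch of that one step):

```shell
older_than_days=7
# GNU date: compute the YYYY-MM-DD cutoff used in the filter expression.
cutoff=$(date --date="-$older_than_days days" +'%Y-%m-%d')
echo "Keeping images newer than: $cutoff"
```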
Then I created a scheduled trigger on Cloud Build for the following cloudbuild.yaml file:
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    id: Clean up older versions
    entrypoint: 'bash'
    args: ['cleanup-old-images.sh', '$_LOCATION', '$PROJECT_ID', '$_REPOSITORY_ID', '$_IMAGE_NAME', '$_OLDER_THAN_DAYS']
timeout: 1200s
#!/usr/bin/env bash

_cleanup() {
  image_path="$2-docker.pkg.dev/$project_id/$1"
  echo "Starting to filter: $image_path"
  images=$(gcloud artifacts docker images list $image_path \
    --filter="UPDATE_TIME.date('%Y-%m-%d', Z)<=$(date --date="-1 years" +'%Y-%m-%d')" \
    --format='value(IMAGE)')
  if [ -z "$images" ]; then
    echo "No images to clean"
  else
    echo "Images found: $images"
    for each in $images; do
      echo "Deleting image: $each"
      gcloud artifacts docker images delete "$each" --quiet
    done
  fi
}

project_id=$1

gcloud artifacts repositories list --format="value(REPOSITORY,LOCATION)" --project=$project_id | tee -a repo.txt

while read p; do
  stringarray=($p)
  _cleanup ${stringarray[0]} ${stringarray[1]}
done < repo.txt

echo
echo "DONE"
echo "Deleting repo.txt file"
rm -rf repo.txt
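The temporary repo.txt file isn't strictly needed: the listing can be piped straight into the loop (fine here, since nothing set inside the pipeline's subshell is needed afterwards). A sketch, where `list_repos` is a hypothetical stub standing in for the real gcloud call:

```shell
# Hypothetical stub standing in for:
#   gcloud artifacts repositories list --format="value(REPOSITORY,LOCATION)"
list_repos() {
  printf 'repo-a europe-west2\nrepo-b us-central1\n'
}

# read splits each line into repository and location fields directly.
list_repos | while read -r repo location; do
  echo "cleanup $repo in $location"
done
```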

AWS CDK - post deployment actions

Is anyone aware of a method to execute post-deploy functionality? Following is a sample of a typical CDK app.
app = core.App()
Stack(app, ...)
app.synth()
What I am looking for is a way to apply some logic after the template is deployed. The thing is, the app completes before the cdk tool starts deploying the template.
thanks
Edit: CDK now has https://github.com/cdklabs/cdk-triggers, which allows calling Lambda functions before/after resource/stack creation
You can't do that from CDK at the moment. See https://github.com/awslabs/aws-cdk/issues/2849. Maybe add your +1 there, let them know you'd like to see this feature.
What you can do is wrap cdk deploy in a shell script that will run whatever you need after the CDK is done. Something like:
#!/bin/sh
cdk deploy "$@"
success=$?
if [ $success -ne 0 ]; then
  exit $success
fi
run_post_deploy_with_arguments.sh "$@"
This will run deploy with the given arguments, then call a shell script passing it the same arguments if deployment was successful. This is a very crude example.
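The wrapper depends on `"$@"` specifically: `$#` expands to the argument *count*, and an unquoted `$*` would re-split quoted arguments. A quick sketch (the function name is just for illustration):

```shell
# Prints one line per argument, preserving quoted arguments intact.
show_args() { printf '%s\n' "$@"; }

show_args one "two words" three
```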
Instead of wrapping the cdk deploy command in a bash script I find it more convenient to add a pre and post deployment script under a cdk_hooks.sh file and call it before and after the CDK deployment command via the cdk.json file. In this way you can keep using the cdk deploy command without calling custom scripts manually.
cdk.json
{
  "app": "sh cdk_hooks.sh pre && npx ts-node bin/stacks.ts && sh cdk_hooks.sh post",
  "context": {
    "@aws-cdk/core:enableStackNameDuplicates": "true",
    "aws-cdk:enableDiffNoFail": "true"
  }
}
and cdk_hooks.sh
#!/bin/bash
PHASE=$1

case "$PHASE" in
  pre)
    # Do something
    ;;
  post)
    # Do something
    ;;
  *)
    echo "Please provide a valid cdk_hooks phase"
    exit 64
    ;;
esac
You can use a CustomResource to run some code in a Lambda (which you will also need to deploy, unfortunately). The Lambda will receive the custom resource lifecycle event (create, update, delete), so you will be able to handle different scenarios: say you want to seed some table after deploy; this way you will also be able to clean up the data on destroy.
Here is a pretty good post about it.
Personally I couldn't find a more elegant way to do this.
Short answer: you can't. I've been waiting for this feature as well.
What you can do is wrap your deployment in a custom script that performs all your other logic, which also makes sense given that what you want to do is probably not strictly a "deploy thing" but more like "configure this and that now that the deploy is finished".
Another solution would be to rely on codebuild to perform your deploys and define there all your steps and which custom scripts to run after a deploy (I personally use this solution, with a specific stack to deploy this particular codedeploy project).

Semantic versioning with AWS CodeBuild

Currently my team is using Jenkins to manage our CI/CD workflow. As our infrastructure is entirely in AWS I have been looking into migrating to AWS CodePipeline/CodeBuild to manage this.
In the current state, we version our artifacts as <major>.<minor>.<patch>-<jenkins build #>, i.e. 1.1.1-987. However, CodeBuild doesn't seem to have any concept of a build number. As artifacts are stored in S3 like <bucket>/<version>/<artifact>, I would really hate to lose this versioning approach.
CodeBuild does provide a few env variables that i can see here: http://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref.html#build-env-ref-env-vars
But from what is available it seems silly to try to use the build ID or anything else.
Is there anything readily available from CodeBuild that could support an incremental build number? Or is there an AWS-recommended approach to semantic versioning? Searching this topic returns remarkably few results.
Any help or suggestions are greatly appreciated.
The suggestion to use the date wasn't really going to work for our use case. We ended up storing a base version in SSM and creating a script, run within the buildspec, that grabs, increments, and writes the version back to SSM. It's easy enough to do:
Create a String/SecureString in SSM named [NAME], for example "BUILD_VERSION". The value should be in [MAJOR.MINOR.PATCH] (or [MAJOR.PATCH]) format.
Create a shell script. The one below should be taken as a basic template, you will have to modify it to your needs:
#!/bin/bash
if [ "$1" = 'next' ]; then
  version=$(aws ssm get-parameter --name "BUILD_VERSION" --region 'us-east-1' --with-decryption | sed -n -e 's/.*Value\"[^\"]*//p' | sed -n -e 's/[\"\,]//gp')
  majorminor=$(printf $version | grep -o ^[0-9]*\\.[0-9]*\. | tr -d '\n')
  patch=$(printf $version | grep -o [0-9]*$ | tr -d '\n')
  patch=$(($patch+1))
  silent=$(aws ssm put-parameter --name "BUILD_VERSION" --value "$majorminor$patch" --type "SecureString" --overwrite)
  echo "$majorminor$patch"
fi
Call the versioning script from within buildspec and use the output however you need.
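The grep-based major/minor/patch split in the script above can also be done with plain shell parameter expansion. A local sketch of just the increment step (no AWS calls; the starting version is an example, as if it had been read from SSM):

```shell
version="1.4.9"                 # example value, as if read from SSM
majorminor="${version%.*}."     # strip the last .component, keep trailing dot
patch="${version##*.}"          # keep only the last component
patch=$((patch + 1))
echo "${majorminor}${patch}"    # next version
```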
It may be late to post this answer, but since this feature has not yet been released by AWS it may help a few people in a similar boat.
We used Jenkins build numbers for versioning and were migrating to CodeBuild/CodePipeline. The CodeBuild ID did not work for us as it is very random.
So in the interim we create our own build number in the buildspec file:
BUILD_NUMBER=$(date +%y%m%d%H%M%S)
This way at least we are able to look at the id and know when it was deployed and have some consistency in the numbering.
So in your case, it would be 1.1.1-181120193918 instead of 1.1.1-987.
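The timestamp scheme above is a single `date` call whose shape can be checked locally:

```shell
# Two-digit year + month + day + hour + minute + second: 12 digits total.
BUILD_NUMBER=$(date +%y%m%d%H%M%S)
echo "1.1.1-$BUILD_NUMBER"
```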
Hope this helps.
CodeBuild supports semantic versioning.
In the configuration for the CodeBuild project you need to enable semantic versioning (or set overrideArtifactName via the CLI/API).
Then in your buildspec.yml file specify a name using the Shell command language:
artifacts:
  files:
    - '**/*'
  name: myname-$(date +%Y-%m-%d)
Caveat: I have tried lots of variations of this and cannot get it to work.
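For what it's worth, the `$(date ...)` part of the artifact name can at least be evaluated locally to see what CodeBuild would substitute (a sketch; whether the substitution actually happens depends on the semantic-versioning setting mentioned above):

```shell
# Evaluate the same shell expression CodeBuild would expand in the name field.
name="myname-$(date +%Y-%m-%d)"
echo "$name"
```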

Get the Default GCP Project ID with a Cloud SDK CLI One-Liner

I’m looking for a gcloud one-liner to get the default project ID ($GCP_PROJECT_ID).
The list command gives me:
gcloud config list core/project
#=>
[core]
project = $GCP_PROJECT_ID
Your active configuration is: [default]
While I only want the following output:
gcloud . . .
#=>
$GCP_PROJECT_ID
The easiest way to do this is to use the --format flag with gcloud:
gcloud config list --format 'value(core.project)' 2>/dev/null
The --format flag is available on all commands and gives you full control over what is printed, and how it is formatted.
You can see this help page for full info:
gcloud topic formats
Thanks to comment from Tim Swast above, I was able to use:
export PROJECT_ID=$(gcloud config get-value project)
to get the project ID. Running the get-value command prints the following:
gcloud config get-value project
#=>
Your active configuration is: [default]
$PROJECT_ID
You can also run:
gcloud config get-value project 2> /dev/null
to just print $PROJECT_ID and suppress other warnings/errors.
With Google Cloud SDK 266.0.0 you can use the following command:
gcloud config get-value project
Not exactly the gcloud command you specified, but will return you the currently configured project:
gcloud info | tr -d '[]' | awk '/project:/ {print $2}'
Works for account, zone and region as well.
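The tr/awk pipeline can be exercised on a canned line; the sample below assumes `gcloud info` prints the project as `project: [my-project-id]` (an assumed output shape, not captured gcloud output):

```shell
# Simulated line from `gcloud info` output (assumption).
sample='project: [my-project-id]'

# Strip the brackets, then print the second field of the project: line.
project=$(echo "$sample" | tr -d '[]' | awk '/project:/ {print $2}')
echo "$project"
```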
From Cloud Shell or any machine where Cloud SDK is installed, we can use:
echo $DEVSHELL_PROJECT_ID
I got a question about how to set the environment variable $DEVSHELL_PROJECT_ID; here are the steps:
If the URL has the project variable set to some project ID, then the environment variable $DEVSHELL_PROJECT_ID will usually be set to that project ID.
If the project variable is not set in the URL, we can choose the project from the combo box (beside the title Google Cloud Platform), which will set the project variable in the URL. We may need to restart Cloud Shell or refresh the entire web page for the environment variable $DEVSHELL_PROJECT_ID to be set.
Otherwise, if the environment variable $DEVSHELL_PROJECT_ID is not set, we can set it by the command shown below where we replace PROJECT_ID with the actual project id.
gcloud config set project PROJECT_ID
The direct and easy way to get the default $PROJECT_ID is answered above.
In case you would like to get $PROJECT_ID from the info command, here is a way to do it:
gcloud info --format=flattened | awk '/config.project/ {print $2}'
or:
gcloud info --format=json | jq -r '.config.project'
Just run:
gcloud info --format={flattened|json}
to see the output, then use awk, jq or similar tools to grab what you need.

Reading revision string in post-deploy hook

I thought this would be easy, but I cannot manage to find a way to get the revision string from a post-deploy hook on EBS. The use case is straightforward: I want to notify Rollbar of a deploy.
Here is the current script :
# Rollbar deploy notifier
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/90_notify_rollbar.sh":
    mode: "000755"
    content: |
      #!/bin/bash
      . /opt/elasticbeanstalk/support/envvars
      LOCAL_USERNAME=`whoami`
      REVISION=`date +%Y-%m-%d:%H:%M:%S`
      curl https://api.rollbar.com/api/1/deploy/ \
        -F access_token=$ROLLBAR_KEY \
        -F environment=$RAILS_ENV \
        -F revision=$REVISION \
        -F local_username=$LOCAL_USERNAME
So far I'm using the current date as the revision number, but that isn't really helpful. I tried using /opt/elasticbeanstalk/bin/get-config, but I couldn't find anything relevant in the environment and container sections, and couldn't read anything from meta. Plus, I found no docs about those, so...
Ideally, I would also like the username of the deployer, not the one on the local machine, but that would be the cherry on the cake.
Thanks for your time!
You can update your Elastic Beanstalk instance profile role (aws-elasticbeanstalk-ec2-role) to allow it to call Elastic Beanstalk APIs. In the post-deploy hook you can call DescribeEnvironments with the current environment name using the AWS CLI or any of the AWS SDKs.
Let me know if you have any more questions about this or if this does not work for you.
I'm also looking for an easier alternative to using the API. For now I use bash:
eb deploy && curl https://api.rollbar.com/api/1/deploy/ \
  -F access_token=xxx \
  -F environment=production \
  -F revision=`git rev-parse --verify HEAD` \
  -F rollbar_username=xxx
Replace xxx with your token and username.