Is there any type of AWS CodeBuild environment variable that can help stamp versioning information on build artifacts, i.e. the equivalents of what Bamboo has, such as bamboo_buildNumber? Ideally I would want both a build number and an SCM revision.
The docs talk about CODEBUILD_x variables for internal use, but I'm unable to find a listing of them.
The environment variables vended by CodeBuild for consumption are listed here: http://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref.html#build-env-ref-env-vars
For build-number-related information, you can use CODEBUILD_BUILD_ID or CODEBUILD_BUILD_ARN. For source-related information, depending on how the build was triggered and what the input parameters to the build were (e.g. if you've specified a source version while starting your build -- reference), you can additionally use the CODEBUILD_SOURCE_VERSION or CODEBUILD_SOURCE_REPO_URL environment variables.
The CodeBuild documentation has not yet been updated with detailed information about these environment variables.
Thanks!
Amazon has very recently added a BuildNumber environment variable.
According to https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-env-vars.html
CODEBUILD_BUILD_NUMBER: The current build number for the project.
There are lots of environment variables, but in my experience they are not very reliable as they depend on how the build is triggered. The most useful ones seem to be:
echo "Region = ${AWS_REGION}"
echo "Account Id = $(echo $CODEBUILD_BUILD_ARN | cut -f5 -d ':')"
echo "Repo Name = $(echo $CODEBUILD_SOURCE_VERSION | cut -f2 -d '/')"
echo "Commit Id = ${CODEBUILD_RESOLVED_SOURCE_VERSION}"
Which outputs:
Region = us-west-2
Account Id = 0123456789
Repo Name = my-app
Commit Id = a46218c9160f932f2a91748a449b3f9818964642
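As an illustration, here is a minimal sketch of stamping a version onto an artifact from these variables. The CODEBUILD_* values are mocked, since they are only populated inside a CodeBuild container, and the base version is a hypothetical placeholder:

```shell
#!/bin/bash
# Mock values -- inside a real CodeBuild container these are set automatically.
CODEBUILD_BUILD_NUMBER="987"
CODEBUILD_RESOLVED_SOURCE_VERSION="a46218c9160f932f2a91748a449b3f9818964642"

# Stamp: <base version>-<build number>+<short commit id>
BASE_VERSION="1.1.1"
SHORT_SHA="${CODEBUILD_RESOLVED_SOURCE_VERSION:0:7}"
ARTIFACT_VERSION="${BASE_VERSION}-${CODEBUILD_BUILD_NUMBER}+${SHORT_SHA}"

echo "$ARTIFACT_VERSION"
```

Inside a real build you would drop the two mock assignments and use the string when naming or uploading the artifact.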
Hi, I have my secrets in Secret Manager in one project and want to know how to copy or migrate them to another project.
Is there a mechanism to do this smoothly?
As of today there is no way to have GCP move the Secret between projects for you.
It's a good feature request that you can file here: https://b.corp.google.com/issues/new?component=784854&pli=1&template=1380926
I just had to deal with something similar myself, and came up with a simple bash script that does what I need. I run Linux.
There are some prerequisites:
Download the gcloud CLI for your OS.
Get the list of secrets you want to migrate (you can do this by pointing gcloud at the source project with gcloud config set project [SOURCE_PROJECT], and then running gcloud secrets list).
Once you have the list, convert it textually to a list in the format "secret_a" "secret_b" ...
The latest version of each secret is taken, so it must not be in a "disabled" state, or the script won't be able to move it.
Then you can run:
gcloud config set project [SOURCE_PROJECT]
declare -a secret_array=("secret_a" "secret_b" ...)
for i in "${secret_array[@]}"
do
    SECRET_NAME="${i}_env_file"
    SECRET_VALUE=$(gcloud secrets versions access "latest" --secret="${SECRET_NAME}")
    echo "$SECRET_VALUE" > secret_migrate
    gcloud secrets create "${SECRET_NAME}" --project [TARGET_PROJECT] --data-file=secret_migrate
done
rm secret_migrate
What this script does is set the project to the source one, then fetch the secrets one by one, save each to a file, and upload it to the target project.
The file is rewritten for each secret and deleted at the end.
You need to replace the secrets array (secret_array) and the project names ([SOURCE_PROJECT], [TARGET_PROJECT]) with your own data.
I used this version below, which also sets a different name, and labels according to the secret name:
gcloud config set project [SOURCE_PROJECT]
declare -a secret_array=("secret_a" "secret_b" ...)
for i in "${secret_array[@]}"
do
    SECRET_NAME="${i}"
    SECRET_VALUE=$(gcloud secrets versions access "latest" --secret="${SECRET_NAME}")
    echo "$SECRET_VALUE" > secret_migrate
    gcloud secrets create "${SECRET_NAME}" --project [TARGET_PROJECT] --data-file=secret_migrate --labels=environment=test,service="${i}"
done
rm secret_migrate
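To avoid doing the textual conversion of the secret list by hand, a small sketch like the one below could quote the names automatically. The gcloud secrets list output is mocked with printf here, since gcloud is not assumed to be available:

```shell
#!/bin/bash
# Hypothetical helper: turn the plain name list that
# `gcloud secrets list --format='value(name)'` would print into the
# quoted "secret_a" "secret_b" ... form the migration script expects.
# The gcloud output is mocked with printf.
names=$(printf 'secret_a\nsecret_b\nsecret_c\n')

# Wrap each line in double quotes, then join the lines with spaces.
quoted=$(echo "$names" | sed 's/^/"/; s/$/"/' | tr '\n' ' ')
quoted="${quoted% }"   # trim the trailing space

echo "$quoted"
```

The resulting string can be pasted into the declare -a secret_array=( ... ) line.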
All "secrets" must be decrypted before a CPU can process them, as hardware decryption isn't practical for commercial use. Because of this, getting your passwords/configuration (in plain text) is as simple as logging into one of your deployments that has the so-called "secrets" and typing env, a command used to list all environment variables on most Linux systems.
If your secret is a text file, just use the program cat to read the file. I haven't found a way to read these values from GCP directly, because "security" is paramount.
GCP has methods of exec'ing into a running container, but you could also look into kubectl commands for this. I believe the plain-text secrets are encrypted on Google's servers and then decrypted when they're put into your cluster/pod.
I'm trying to set up an AWS CodeBuild project to run tests to validate PRs and commits on a GitHub repository.
Because of the nature of the repo (a monorepo combining several ML models):
I need to restrict the build to only run tests associated with the files changed in the PR/commit, to keep time and cost under control, but
the tests will typically reference other unchanged files in the repo, so I can't pull only the changed files into the build container.
How can a running CodeBuild build triggered by a GitHub PR (as per the docs here) 'see' which files are changed by the PR to selectively execute tests?
In your buildspec file you can run shell commands, so you can use some git commands there and echo the result to see it in the logs during the build.
You can use git diff --name-only $CODEBUILD_RESOLVED_SOURCE_VERSION $CODEBUILD_WEBHOOK_PREV_COMMIT
Where $CODEBUILD_WEBHOOK_PREV_COMMIT is the commit ID of the previous commit, and $CODEBUILD_RESOLVED_SOURCE_VERSION is the commit ID of the current one.
Inside a build phase you can check the change with:
- |
  if [ "$(git diff --name-only $CODEBUILD_RESOLVED_SOURCE_VERSION $CODEBUILD_WEBHOOK_PREV_COMMIT | grep -e <file_path>)" != "" ]; then
    # your code
  fi
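A hedged sketch of how the diff output might drive selective test execution. The changed-file list is mocked here (in a real build it would come from the git diff command above), and the models/ layout is a hypothetical monorepo example:

```shell
#!/bin/bash
# Mocked changed-file list; in CodeBuild it would come from:
#   git diff --name-only $CODEBUILD_RESOLVED_SOURCE_VERSION $CODEBUILD_WEBHOOK_PREV_COMMIT
changed_files=$(printf 'models/bert/train.py\nmodels/bert/config.yml\ndocs/README.md\n')

# Collect the top-level model directories that contain changes.
changed_models=$(echo "$changed_files" | grep '^models/' | cut -d/ -f2 | sort -u)

for model in $changed_models; do
    echo "would run tests for: $model"
done
```

Each affected model gets its test suite run once, while unrelated changes (docs here) trigger nothing.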
I'm using elastic beanstalk to deploy a Django app. I'd like to SSH on the EC2 instance to execute some shell commands but the environment variables don't seem to be there. I specified them via the AWS GUI (configuration -> environment properties) and they seem to work during the boot-up of my app.
I tried activating and deactivating the virtual env via:
source /var/app/venv/*/bin/activate
Is there an environment (or a script I can run) that gives me a shell with all the properties set? Otherwise I can hardly run any command like python3 manage.py ..., since there is no settings module configured (I know how to specify it manually, but my app needs around 7 variables to work).
During deployment, the environment properties are readily available to your .platform hook scripts.
After deployment, e.g. when using eb ssh, you need to load the environment properties manually.
One option is to use the EB get-config tool. The environment properties can be accessed either individually (using the -k option), or as a JSON or YAML object with key-value pairs.
For example, one way to export all environment properties would be:
export $(/opt/elasticbeanstalk/bin/get-config --output YAML environment |
sed -r 's/: /=/' | xargs)
Here the get-config part returns all environment properties as YAML, the sed part replaces the ': ' in the YAML output with '=', and the xargs part fixes quoted numbers.
Note this does not require sudo.
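To see what the sed and xargs steps do, the get-config output can be mocked with printf (the property names and values here are hypothetical examples); note this simple approach assumes values without spaces, per the caveat about special characters further down:

```shell
#!/bin/bash
# Mocked `get-config --output YAML environment` output: one `key: value` per line.
yaml_output=$(printf "DJANGO_SETTINGS_MODULE: myapp.settings\nDEBUG: 'false'\n")

# Replace the first ': ' on each line with '=', strip quotes via xargs, export all.
export $(echo "$yaml_output" | sed -r 's/: /=/' | xargs)

echo "$DJANGO_SETTINGS_MODULE"
echo "$DEBUG"
```

After the export, the properties behave like ordinary shell environment variables, so manage.py commands can see them.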
Alternatively, you could refer to this AWS knowledge center post:
Important: On Amazon Linux 2, all environment properties are centralized into a single file called /opt/elasticbeanstalk/deployment/env. You must use this file during Elastic Beanstalk's application deployment process only. ...
The post describes how to make a copy of the env file during deployment, using .platform hooks, and how to set permissions so you can access the file later.
You can also perform similar steps manually, using SSH. Once you have the copy set up, with the proper permissions, you can source it.
Beware:
Note: Environment properties with spaces or special characters are interpreted by the Bash shell and can result in a different value.
Try running the command /opt/elasticbeanstalk/bin/get-config environment after you ssh into the EC2 instance.
If you are trying to access the environment variables in an Elastic Beanstalk script, use this:
$(/opt/elasticbeanstalk/bin/get-config environment -k ENVURL)
where ENVURL is the name of the environment property you want to read.
Currently my team is using Jenkins to manage our CI/CD workflow. As our infrastructure is entirely in AWS I have been looking into migrating to AWS CodePipeline/CodeBuild to manage this.
In our current setup, we version our artifacts as <major>.<minor>.<patch>-<jenkins build #>, e.g. 1.1.1-987. However, CodeBuild doesn't seem to have any concept of a build number. As artifacts are stored in S3 like <bucket>/<version>/<artifact>, I would really hate to lose this versioning approach.
CodeBuild does provide a few env variables that i can see here: http://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref.html#build-env-ref-env-vars
But from what is available it seems silly to try to use the build ID or anything else.
Is there anything readily available from CodeBuild that could support an incremental build number? Or is there an AWS-recommended approach to semantic versioning? Searching this topic returns remarkably few results.
Any help or suggestions are greatly appreciated.
The suggestion to use date wasn't really going to work for our use case. We ended up creating a base version in SSM and creating a script that runs within the buildspec that grabs, increments, and updates the version back to SSM. It's easy enough to do:
Create a String/SecureString within SSM named [NAME] -- for example, "BUILD_VERSION". The value should be in [MAJOR.MINOR.PATCH] or [MAJOR.PATCH] format.
Create a shell script. The one below should be taken as a basic template, you will have to modify it to your needs:
#!/bin/bash
if [ "$1" = 'next' ]; then
    version=$(aws ssm get-parameter --name "BUILD_VERSION" --region 'us-east-1' --with-decryption | sed -n -e 's/.*Value\"[^\"]*//p' | sed -n -e 's/[\"\,]//gp')
    majorminor=$(printf $version | grep -o ^[0-9]*\\.[0-9]*\. | tr -d '\n')
    patch=$(printf $version | grep -o [0-9]*$ | tr -d '\n')
    patch=$(($patch+1))
    silent=$(aws ssm put-parameter --name "BUILD_VERSION" --value "$majorminor$patch" --type "SecureString" --overwrite)
    echo "$majorminor$patch"
fi
Call the versioning script from within buildspec and use the output however you need.
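The increment step of the script above, isolated so it can be tried without SSM access (the starting version is hard-coded in place of the get-parameter call):

```shell
#!/bin/bash
# Hard-coded in place of the `aws ssm get-parameter` call.
version="1.2.3"

# Split off the MAJOR.MINOR. prefix and the trailing patch number, then bump it.
majorminor=$(printf '%s' "$version" | grep -o '^[0-9]*\.[0-9]*\.' | tr -d '\n')
patch=$(printf '%s' "$version" | grep -o '[0-9]*$' | tr -d '\n')
patch=$((patch + 1))

echo "${majorminor}${patch}"
```

With the mock value above this prints 1.2.4; in the real script the result is written back to SSM with put-parameter.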
It may be late as I post this answer, but since this feature has not yet been released by AWS, it may help a few people in a similar boat.
We used Jenkins build numbers for versioning and were migrating to CodeBuild/CodePipeline. The CodeBuild ID did not work for us as it is essentially random.
So in the interim we create our own build number in the buildspec file:
BUILD_NUMBER=$(date +%y%m%d%H%M%S)
This way at least we are able to look at the id and know when it was deployed and have some consistency in the numbering.
So in your case, it would be 1.1.1-181120193918 instead of 1.1.1-987.
Hope this helps.
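As a sketch, the timestamp-based build number amounts to (the base version 1.1.1 is a placeholder):

```shell
#!/bin/bash
# Timestamp as build number: yymmddHHMMSS, always increasing between builds.
BUILD_NUMBER=$(date +%y%m%d%H%M%S)
VERSION="1.1.1-${BUILD_NUMBER}"

echo "$VERSION"
```

Unlike a true counter, consecutive builds are not numbered 987, 988, ..., but the ordering and the deployment time are both recoverable from the suffix.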
CodeBuild supports semantic versioning.
In the configuration for the CodeBuild project you need to enable semantic versioning (or set overrideArtifactName via the CLI/API).
Then in your buildspec.yml file specify a name using the Shell command language:
artifacts:
  files:
    - '**/*'
  name: myname-$(date +%Y-%m-%d)
Caveat: I have tried lots of variations of this and cannot get it to work.
I’m looking for a gcloud one-liner to get the default project ID ($GCP_PROJECT_ID).
The list command gives me:
gcloud config list core/project
#=>
[core]
project = $GCP_PROJECT_ID
Your active configuration is: [default]
While I only want the following output:
gcloud . . .
#=>
$GCP_PROJECT_ID
The easiest way to do this is to use the --format flag with gcloud:
gcloud config list --format 'value(core.project)' 2>/dev/null
The --format flag is available on all commands and gives you full control over what is printed, and how it is formatted.
You can see this help page for full info:
gcloud topic formats
Thanks to the comment from Tim Swast above, I was able to use:
export PROJECT_ID=$(gcloud config get-value project)
to get the project ID. Running the get-value command prints the following:
gcloud config get-value project
#=>
Your active configuration is: [default]
$PROJECT_ID
You can also run:
gcloud config get-value project 2> /dev/null
to just print $PROJECT_ID and suppress other warnings/errors.
With Google Cloud SDK 266.0.0 you can use the following command:
gcloud config get-value project
Not exactly the gcloud command you specified, but this will return the currently configured project:
gcloud info | tr -d '[]' | awk '/project:/ {print $2}'
Works for account, zone and region as well.
From Cloud Shell or any machine where Cloud SDK is installed, we can use:
echo $DEVSHELL_PROJECT_ID
I got a question about how to set the environment variable $DEVSHELL_PROJECT_ID; here are the steps:
If the URL has the project variable set to some project ID, then the environment variable $DEVSHELL_PROJECT_ID will usually be set to that project ID.
If the project variable is not set in the URL, you can choose the project from the combo box (beside the title Google Cloud Platform), which will set the project variable in the URL. You may need to restart Cloud Shell or refresh the entire web page for the environment variable $DEVSHELL_PROJECT_ID to be set.
Otherwise, if the environment variable $DEVSHELL_PROJECT_ID is still not set, you can set it with the command below, replacing PROJECT_ID with the actual project ID.
gcloud config set project PROJECT_ID
A direct and easy way to get the default $PROJECT_ID is answered above.
In case you would like to get $PROJECT_ID from the info command, here is a way to do it:
gcloud info --format=flattened | awk '/config.project/ {print $2}'
or:
gcloud info --format=json | jq '.config.project' | tr -d '"'
Just run:
gcloud info --format={flattened|json}
to see the output, then use awk, jq or similar tools to grab what you need.
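For a quick way to try the awk step without a configured gcloud, the flattened output can be mocked with printf (the account and project values are hypothetical):

```shell
#!/bin/bash
# Mocked `gcloud info --format=flattened` output: `key: value` pairs, one per line.
flattened=$(printf 'config.account: someone@example.com\nconfig.project: my-gcp-project\n')

# Match the config.project line and print its second field (the value).
project=$(echo "$flattened" | awk '/config.project/ {print $2}')
echo "$project"
```

The same pattern works for any other key in the flattened output, e.g. /config.account/.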