CodeBuild: get the list of folders that have changed files inside them (buildspec.yml)

I'm trying to make my first build on AWS and this is my buildspec.yml; I was just testing whether my command works on CodeBuild:
version: 0.2
env:
  git-credential-helper: yes
phases:
  install:
    runtime-versions:
      nodejs: 16
  pre_build:
    commands:
      - aws codeartifact login ...
  build:
    commands:
      - changed_folders=$(git diff --dirstat=files,0 HEAD~1 | awk '{print $2}' | xargs -I {} dirname {} | awk -F '/' '{print $1}' | sort | uniq)
      - echo $changed_folders
This command works locally, but when the build runs
git diff --dirstat=files,0 HEAD~1 | awk '{print $2}' | xargs -I {} dirname {} | awk -F '/' '{print $1}' | sort | uniq
there is an error saying:
fatal: ambiguous argument 'HEAD~1': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
I tried replacing HEAD~1 with $CODEBUILD_WEBHOOK_HEAD_REF; the error goes away, but I get an empty result when I echo $changed_folders.
I'm using GitHub as my repository.
Reference: https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-env-vars.html

Normally, build systems (Jenkins, GitHub Actions, CodeBuild) fetch only the last commit of the triggered ref, so git commands that need history return empty results. By default they clone the source like git clone --depth 1 -b <branch> <repo_url>.
AWS has announced that you can fetch a full-history clone in CodeBuild.
When you enable the "Full clone" option on your pipeline's source, you will be able to get the changed files.
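If enabling the full clone option isn't possible, a common workaround is to deepen the shallow checkout yourself in a pre_build command before running git diff. A minimal sketch, assuming the checkout's credentials (for example via git-credential-helper: yes) still allow fetching from the remote:
# Deepen the shallow clone so that HEAD~1 exists; --unshallow pulls the full
# history, while --deepen=1 is enough for a single-commit diff.
git fetch --unshallow || git fetch --deepen=1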

Related

How to get CodeBuild project name from buildspec file

This is my buildspec file to build an Angular, React, Vue or similar project through AWS CodeCommit and publish the resulting artifact to an S3 bucket:
version: 0.2
env:
  variables:
    S3_BUCKET: "my-bucket"
phases:
  install:
    runtime-versions:
      nodejs: 16
  pre_build:
    commands:
      - echo Installing source NPM dependencies...
      - npm install
  build:
    commands:
      - echo Build started on `date`
      - npm run build
  post_build:
    commands:
      - aws s3 cp dist s3://${S3_BUCKET} --recursive
      - echo Build completed on `date`
What I would like to do is use a subfolder with the name of the project when publishing the resulting files to the bucket. Right now all files go to my-bucket, but I would like them to go to my-bucket/name-of-the-project.
I could change the post-build command to something like
- aws s3 cp dist s3://${S3_BUCKET}/name-of-the-project --recursive
That way the directory name would always be the same. What I want is to dynamically get the name of the CodeBuild project, or read it from package.json or similar, so that the directory matches the project name.
Here are two ways to read a project identifier from the build context at run-time:
Option 1: Read the project name from package.json:
PROJECT_NAME=$(cat package.json | jq -r '.name')
echo $PROJECT_NAME # -> name-of-the-project
Option 2: Extract the CodeCommit repo name from the source URL. Each CodeBuild execution exposes several environment variables, including CODEBUILD_SOURCE_REPO_URL.
echo $CODEBUILD_SOURCE_REPO_URL # -> https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo
REPO_NAME=$(echo $CODEBUILD_SOURCE_REPO_URL | awk -F'/' '{print $NF}') # split the url at '/', return the last item
echo $REPO_NAME # -> my-repo
Pass one of the captured names to the S3 command:
aws s3 cp dist s3://${S3_BUCKET}/${PROJECT_NAME} --recursive
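A combined sketch that prefers the package.json name and falls back to the repository name (jq is assumed to be available in the build image, and CODEBUILD_SOURCE_REPO_URL is assumed to be populated for your source type):
# Prefer the package.json name, fall back to the repo name from the source URL,
# then upload the build output into a matching prefix.
PROJECT_NAME=$(jq -r '.name // empty' package.json)
[ -n "$PROJECT_NAME" ] || PROJECT_NAME=$(echo "$CODEBUILD_SOURCE_REPO_URL" | awk -F'/' '{print $NF}')
aws s3 cp dist "s3://${S3_BUCKET}/${PROJECT_NAME}" --recursive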

What does - mean in kubectl -f -

What does the last - (following -f) mean in the following command:
kustomize build config/samples | kubectl apply -f -
The trailing - tells kubectl to read the manifest from standard input instead of from a file. Snippet from the kubectl documentation:
Apply the JSON passed into stdin to a pod
cat pod.json | kubectl apply -f -
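The same stdin behaviour works with a here-document, which is handy when the manifest is generated inline. A small illustrative sketch (the ConfigMap below is made up):
# The final '-' makes kubectl apply read the YAML from stdin rather than a file path.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  greeting: hello
EOF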

AWS CDK: Uploaded file must be a non-empty zip

I have written a simple hello world Lambda function to deploy, but after running cdk deploy it gives this error. Can someone please offer guidance on this?
This issue might be caused by https://github.com/aws/aws-cdk/issues/12536. You should try:
Upgrading your Node.js version
Deleting cdk.out
Upgrading to the latest CDK version
Deleting the asset directly from S3 (the bucket will be named something like cdk-hnb659fds-assets-<ACCOUNT NUMBER>-<REGION>)
Deploying again
CDK doesn't re-upload the asset unless it has changed. That's why deleting it, and possibly forcing a change after upgrading Node.js, is required; a command sketch for the S3 cleanup follows.
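A minimal sketch of that cleanup, with a placeholder account, region, and asset key:
# List the assets in the CDK bootstrap bucket, then remove the suspect zip so
# the next deploy re-uploads it. Replace account, region, and key with your own.
aws s3 ls s3://cdk-hnb659fds-assets-123456789012-us-east-1/
aws s3 rm s3://cdk-hnb659fds-assets-123456789012-us-east-1/<asset-hash>.zip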
If all else fails, try the script I wrote that downloads the asset, fixes it by re-zipping, and uploads it again. It expects to run in the root of your project, as it looks for cdk.out.
#!/bin/bash
set -ex

# Locate the cloud assembly and its asset manifests inside cdk.out
ASSEMBLY_DIRECTORY=`jq -r '.artifacts[] | select(.type == "cdk:cloud-assembly") | .properties.directoryName' cdk.out/manifest.json`
ASSET_MANIFESTS=`jq -r '.artifacts[] | select(.type == "cdk:asset-manifest") | .properties.file' cdk.out/$ASSEMBLY_DIRECTORY/manifest.json`
cd cdk.out/$ASSEMBLY_DIRECTORY

# Collect the S3 destinations of all zip assets
ASSETS=`jq -r '.files[].destinations[] | "s3://" + .bucketName + "/" + .objectKey' $ASSET_MANIFESTS | grep zip`

TMP=`mktemp -d`
cd $TMP
for ASSET in $ASSETS
do
  if aws s3 ls $ASSET; then
    # Download, re-zip, and re-upload the asset to repair the archive
    aws s3 cp $ASSET pkg.zip
    mkdir s
    cd s
    if ! unzip ../pkg.zip; then echo bad zip; fi
    rm ../pkg.zip
    zip -r ../pkg.zip * .gitempty
    aws s3 cp ../pkg.zip $ASSET
    cd ..
    rm -rf s
  fi
done
rm -rf $TMP
You can confirm you're having the same issue I was having by downloading the asset zip file. Try extracting it with unzip. If it complains about the checksum or CRC, you had the same issue.
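A minimal sketch of that check, with a placeholder bucket and key (take the real values from the asset manifest in cdk.out):
# Download one suspect asset and test the archive; a CRC or checksum error here
# confirms the corrupt upload.
aws s3 cp s3://cdk-hnb659fds-assets-123456789012-us-east-1/<asset-hash>.zip pkg.zip
unzip -t pkg.zip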
These steps helped resolve it:
delete the cdk.out directory
then run:
cdk synth
cdk bootstrap
cdk deploy
For me it occurred in WSL2.
It turned out it was introduced when I accidentally ran npm i in a Windows console.
The solution was then, in WSL2:
rm -r node_modules
rm -r cdk.out
npm i
cdk synth
Then cdk deploy worked as expected. No bootstrapping was necessary.

AWS CodeBuild: environment variables not found during CI jobs

I usually name artifacts based on the commits they have been built from.
Based on this documentation, CODEBUILD_WEBHOOK_PREV_COMMIT is what I am looking for in AWS CodeBuild.
Here is the buildspec.yml:
phases:
  install:
    commands:
      - apt-get update -y
  build:
    commands:
      - export $CODEBUILD_WEBHOOK_PREV_COMMIT
      - echo Entered the build phase...
      - echo Build started on `date`
      - mvn clean install -Dmaven.test.skip=true
      - for f in ./target/*.car;do mv -- "$f" $(echo $f | sed -E "s/.car$/_${CODEBUILD_WEBHOOK_PREV_COMMIT}.car/") ;done
artifacts:
  files:
    - ./target/*.car
Build works but the commit does not show in the final .car name. I would like to understand why.
Hypothesis #1: variables need to be explicitly sourced
I tried the following, without much success:
env:
  variable:
    - COMMIT="${CODEBUILD_WEBHOOK_PREV_COMMIT}"
phases:
  install:
    commands:
      - apt-get update -y
  build:
    commands:
      - echo Entered the build phase...
      - echo Build started on `date`
      - mvn clean install -Dmaven.test.skip=true
      - carpath=./*_CA/target/*.car
      - for f in $carpath;do mv -- "$f" $(echo $f | sed -E "s/.car$/_${COMMIT}.car/") ;done
Hypothesis #2: variables are only available to AWS's default build container
I am using Maven's official image maven:3.6.3-jdk-8 instead of Amazon's general-purpose build image. Are variables available for custom images? I can't find any clear indication that they are not.
I lost an entire afternoon on this; for anyone coming here with the same problem, here is how I solved it:
First, I put printenv in the commands to see what was going on, and the $CODEBUILD_WEBHOOK_PREV_COMMIT env variable was completely missing. But you can use $CODEBUILD_RESOLVED_SOURCE_VERSION instead, which was there!
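A minimal sketch of the rename step from the question, switched to that variable (the ./target path is the question's, not verified):
# CODEBUILD_RESOLVED_SOURCE_VERSION holds the commit the build resolved to
COMMIT="${CODEBUILD_RESOLVED_SOURCE_VERSION}"
for f in ./target/*.car; do
  mv -- "$f" "$(echo "$f" | sed -E "s/\.car$/_${COMMIT}.car/")"
done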

Why does codebuild.sh fail to run my local build?

I am trying to test my build locally without needing to upload my code all the time. Therefore, I downloaded codebuild.sh onto my Ubuntu machine and placed it at ~/.local/bin/codebuild_build.
Then I made it executable via:
chmod +x ~/.local/bin/codebuild_build
I have the following buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      docker: 18
  pre_build:
    commands:
      - docker login -u $USER -p $TOKEN
  build:
    commands:
      - docker build -f ./dockerfiles/7.0.8/Dockerfile -t myapp/php7.0.8:$(cat VERSION_PHP_708) -t myapp/php7.0.8:latest .
      - docker build -f ./dockerfiles/7.0.8/Dockerfile_develop -t myapp/php7.0.8-dev:$(cat VERSION_PHP_708) -t myapp/php7.0.8-dev:latest .
      - docker build -f ./dockerfiles/7.2/Dockerfile -t myapp/php7.0.8:$(cat VERSION_PHP_72) -t myapp/php7.0.8:latest .
      - docker build -f ./dockerfiles/7.2/Dockerfile_develop -t myapp/php7.0.8-dev:$(cat VERSION_PHP_708) -t myapp/php7.0.8-dev:latest .
  post_build:
    commands:
      - docker push etable/php7.2
      - docker push etable/php7.2-dev
      - docker push etable/php7.0.8
      - docker push etable/php7.0.8-dev
I tried to execute the command like this:
codebuild_build -i amazon/aws-codebuild-local -a /tmp/artifacts/docker-php -e .codebuild -c ~/.aws
But I get the following output:
Build Command:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock -e "IMAGE_NAME=amazon/aws-codebuild-local" -e "ARTIFACTS=/tmp/artifacts/docker-php" -e "SOURCE=/home/pcmagas/Kwdikas/docker-php" -v "/home/pcmagas/Kwdikas/docker-php:/LocalBuild/envFile/" -e "ENV_VAR_FILE=.codebuild" -e "AWS_CONFIGURATION=/home/pcmagas/.aws" -e "INITIATOR=pcmagas" amazon/aws-codebuild-local:latest
Removing agent-resources_build_1 ... done
Removing agent-resources_agent_1 ... done
Removing network agent-resources_default
Removing volume agent-resources_source_volume
Removing volume agent-resources_user_volume
Creating network "agent-resources_default" with the default driver
Creating volume "agent-resources_source_volume" with local driver
Creating volume "agent-resources_user_volume" with local driver
Creating agent-resources_agent_1 ... done
Creating agent-resources_build_1 ... done
Attaching to agent-resources_agent_1, agent-resources_build_1
build_1 | 2020/01/16 14:43:58 Unable to initialize (*errors.errorString: AgentAuth was not specified)
agent-resources_build_1 exited with code 10
Stopping agent-resources_agent_1 ... done
Aborting on container exit...
My ~/.aws has the following files:
$ ls -l /home/pcmagas/.aws
total 8
-rw------- 1 pcmagas pcmagas 32 Aug 8 17:29 config
-rw------- 1 pcmagas pcmagas 116 Aug 8 17:34 credentials
Whilst the config has the following:
[default]
region = eu-central-1
And ~/.aws/credentials is in the following format:
[default]
aws_access_key_id = ^KEY_ID_CENSORED^
aws_secret_access_key = ^ACCESS_KEY_CENSORED^
Also, the .codebuild file contains the required docker login params:
USER=^CENSORED^
TOKEN=^CENSORED^
Hence, the params required for the docker login should be available.
Do you have any idea why the build fails to run locally?
Your pre_build step has a command that logs you in to Docker:
docker login -u $USER -p $TOKEN
Make sure that you have included the docker login credentials in your local environment file.
Change the environment variable names in the '.codebuild' file, e.g.:
DOCKER_USER=^CENSORED^
DOCKER_TOKEN=^CENSORED^
It seems the CodeBuild agent interprets the 'TOKEN' environment variable itself.
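If you rename the variables, the docker login command in the buildspec has to match. A minimal sketch under that assumption (DOCKER_USER and DOCKER_TOKEN are the renamed variables from the local env file, not CodeBuild built-ins):
# .codebuild (the file passed to codebuild_build with -e)
DOCKER_USER=<your-registry-user>
DOCKER_TOKEN=<your-registry-token>

# pre_build command in buildspec.yml, updated to the renamed variables
docker login -u "$DOCKER_USER" -p "$DOCKER_TOKEN"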