AWS CodeBuild Skipping invalid artifact path - not a valid identifier

I have an AWS CodeBuild project that processes two projects; during the build, the source gets built, bundled into zip files, and placed in bundles/*. In the directory tree, bundles contains the generated zip files to be deployed.
It uses the following buildspec.yml:
version: 0.2
phases:
  install:
    commands:
      - ./manager.sh install
  build:
    commands:
      - ./manager.sh build
      - ./manager.sh package
      - ./manager.sh test
      - ./manager.sh test:functional
      - ./manager.sh test:deploy
  post_build:
    commands:
      - ls -l bundles # I see the artifacts on the console using this
artifacts:
  files:
    - 'bundles/*'
The build and tests pass, but the artifact upload then fails.
It returns Skipping invalid artifact path [edited] not a valid identifier . (where the path should be bundles).
I have tried multiple combinations of the following:
This one returns Skipping invalid artifact path [edited] not a valid identifier bundles:
artifacts:
  base-directory: bundles
  files:
    - '**/*'
And this one returns Skipping invalid artifact path [edited] not a valid identifier .
artifacts:
  files:
    - bundles
Here is the full error:
[Container] 2018/02/12 19:13:05 Expanding /codebuild/output/tmp/env.sh: line 69: export: `npm_config_unsafe-perm': not a valid identifier
.
[Container] 2018/02/12 19:13:05 Skipping invalid artifact path /codebuild/output/tmp/env.sh: line 69: export: `npm_config_unsafe-perm': not a valid identifier
.
[Container] 2018/02/12 19:13:05 Phase complete: UPLOAD_ARTIFACTS Success: false
[Container] 2018/02/12 19:13:05 Phase context status code: CLIENT_ERROR Message: No matching base directory path found for /codebuild/output/tmp/env.sh: line 69: export: `npm_config_unsafe-perm': not a valid identifier
.
[Container] 2018/02/12 19:13:07 Runtime error (*errors.errorString: No matching base directory path found for /codebuild/output/tmp/env.sh: line 69: export: `npm_config_unsafe-perm': not a valid identifier
.)
Could it be my Docker container?

I tried multiple things and they all kept failing, so the only lead I had was this:
line 69: export: `npm_config_unsafe-perm'
It appeared multiple times, and that line comes from my Docker image. So I figured that maybe AWS CodeBuild was treating that error output as a false positive for some reason.
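For context, the failure is reproducible in any bash shell: POSIX variable names may not contain hyphens, so when CodeBuild's generated env.sh apparently tries to re-export the image's environment, the export itself fails (a sketch, not CodeBuild's actual script):
# bash rejects hyphenated variable names outright:
$ export npm_config_unsafe-perm=true
bash: export: `npm_config_unsafe-perm=true': not a valid identifier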
I changed my image from lambci/lambda:build-nodejs6.10 to roelofr/node-zip:latest to do a quick test, and lo and behold it worked with no issues.
SO YES, A DOCKER IMAGE MAY BREAK YOUR BUILD EVEN IF THE REST IS GOOD, BEWARE
So I will change the image to something like a personal image that uses Node 6.10.3, just for validation purposes.
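To vet an image for such variables before handing it to CodeBuild, a quick local inspection works (a sketch; assumes Docker is installed locally):
# List the image's environment and look for hyphenated npm config names:
docker run --rm lambci/lambda:build-nodejs6.10 env | grep unsafe-perm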

Related

Prettier run on CI warns about invalid file that does not exist

I have Prettier set up and it was working well, but after I added a file that didn't follow the rules I got an error in the GitHub Action, which was correct. I then added this file to .prettierignore, but that didn't solve my problem.
While trying to figure out why, I removed the file that was causing problems, yet I'm still getting an error saying it's badly formatted (which should be impossible, because the file no longer exists).
I'm running Prettier with the following command:
prettier . --ignore-path .gitignore "--check"
I'm getting the following error:
[warn] public/mockServiceWorker.js
[warn] Code style issues found in the above file. Forgot to run Prettier?
ERROR: "format:check" exited with 1.
Error: Process completed with exit code 1.
Here is my workflow file:
name: Validation
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  build:
    name: Validation
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js 16.15.0
        uses: actions/setup-node@v3
        with:
          node-version: 16.15.0
      - run: npm ci
        env:
          MY_TOKEN: ${{ secrets.MY_SECRET }}
      - run: npm run validate
Of course, it does not occur locally.
The issue was caused by npm ci creating the public/mockServiceWorker.js file; my .prettierignore was being ignored because I had provided a custom ignore file with --ignore-path .gitignore.
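Two ways to fix it, sketched under the assumption that the generated worker file is the only offender:
# Option 1: exclude the generated file in the ignore file the check
# actually uses (here .gitignore, because of --ignore-path .gitignore):
echo "public/mockServiceWorker.js" >> .gitignore
# Option 2: with Prettier 3+, which accepts repeated --ignore-path flags,
# honor both ignore files:
prettier . --ignore-path .gitignore --ignore-path .prettierignore --check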

Terraform init is failing to locate module when providing relative path

I've started building an infrastructure using Terraform. Within that TF configuration I call a module using a relative path. This works in a classic release pipeline, but I have been tasked with converting the pipeline to YAML. When I run the terraform init step, the agent finds the TF config files but can't find the modules folder, even though the artifact was downloaded in a previous task.
YAML file:
trigger:
- master

resources:
  pipelines:
  - pipeline: Dashboard-infra
    project: Infrastructure
    source: IT Dashboard
  - pipeline: Infra-modules
    project: Infrastructure
    source: AWS Modules
    trigger: true

stages:
- stage: Test
  displayName: Test
  variables:
  - group: "Non-Prod Keys"
  jobs:
  - deployment:
    displayName: string
    variables:
      region: us-east-1
      app_name: it-dashboard
      environment: test
      tf.path: 'IT Dashboard'
    pool:
      vmImage: 'ubuntu-latest'
    environment: test
    strategy:
      runOnce:
        deploy:
          steps:
          - task: DownloadBuildArtifacts@1
            inputs:
              buildType: 'specific'
              project: '23e9505e-a627-4681-9598-2bd8b6c1204c'
              pipeline: '547'
              buildVersionToDownload: 'latest'
              downloadType: 'single'
              artifactName: 'drop'
              downloadPath: '$(Agent.BuildDirectory)/s'
          - task: DownloadBuildArtifacts@1
            inputs:
              buildType: 'specific'
              project: '23e9505e-a627-4681-9598-2bd8b6c1204c'
              pipeline: '88'
              buildVersionToDownload: 'latest'
              downloadType: 'single'
              artifactName: 'Modules'
              downloadPath: '$(agent.builddirectory)/s'
          - task: ExtractFiles@1
            inputs:
              archiveFilePatterns: 'drop/infrastructure.zip'
              destinationFolder: '$(System.DefaultWorkingDirectory)'
              cleanDestinationFolder: false
              overwriteExistingFiles: false
          - task: ExtractFiles@1
            inputs:
              archiveFilePatterns: 'Modules/drop.zip'
              destinationFolder: '$(System.DefaultWorkingDirectory)'
              cleanDestinationFolder: false
              overwriteExistingFiles: false
          - task: TerraformInstaller@0
            inputs:
              terraformVersion: '0.12.3'
          - task: TerraformTaskV2@2
            inputs:
              provider: 'aws'
              command: 'init'
              workingDirectory: '$(System.DefaultWorkingDirectory)/$(tf.path)'
              commandOptions: '-var "region=$(region)" -var "app_name=$(app.name)" -var "environment=$(environment)"'
              backendServiceAWS: 'tf_nonprod'
              backendAWSBucketName: 'wdrx-deployments'
              backendAWSKey: '$(environment)/$(app.name)/infrastructure/$(region).tfstate'
Raw error log:
2021-10-29T12:30:16.5973748Z ##[section]Starting: TerraformTaskV2
2021-10-29T12:30:16.5981535Z ==============================================================================
2021-10-29T12:30:16.5981842Z Task : Terraform
2021-10-29T12:30:16.5982217Z Description : Execute terraform commands to manage resources on AzureRM, Amazon Web Services(AWS) and Google Cloud Platform(GCP)
2021-10-29T12:30:16.5982555Z Version : 2.188.1
2021-10-29T12:30:16.5982791Z Author : Microsoft Corporation
2021-10-29T12:30:16.5983122Z Help : [Learn more about this task](https://aka.ms/AA5j5pf)
2021-10-29T12:30:16.5983461Z ==============================================================================
2021-10-29T12:30:16.7253372Z [command]/opt/hostedtoolcache/terraform/0.12.3/x64/terraform init -var region=*** -var app_name=$(app.name) -var environment=test -backend-config=bucket=wdrx-deployments -backend-config=key=test/$(app.name)/infrastructure/***.tfstate -backend-config=region=*** -backend-config=access_key=*** -backend-config=secret_key=***
2021-10-29T12:30:16.7532941Z Initializing modules...
2021-10-29T12:30:16.7558115Z - S3-env in ../Modules/S3
2021-10-29T12:30:16.7578267Z - S3-env.Global-Vars in ../Modules/Global-Vars
2021-10-29T12:30:16.7585434Z - global-vars in
2021-10-29T12:30:16.7597321Z Error: Unreadable module directory
2021-10-29T12:30:16.7599087Z Unable to evaluate directory symlink: lstat ../Modules/global-vars: no such
2021-10-29T12:30:16.7599550Z file or directory
2021-10-29T12:30:16.7600779Z Error: Failed to read module directory
2021-10-29T12:30:16.7601405Z Module directory does not exist or cannot be read.
2021-10-29T12:30:16.7602573Z Error: Unreadable module directory
2021-10-29T12:30:16.7603271Z Unable to evaluate directory symlink: lstat ../Modules/global-vars: no such
2021-10-29T12:30:16.7603636Z file or directory
2021-10-29T12:30:16.7604749Z Error: Failed to read module directory
2021-10-29T12:30:16.7605370Z Module directory does not exist or cannot be read.
2021-10-29T12:30:16.7743995Z ##[error]Error: The process '/opt/hostedtoolcache/terraform/0.12.3/x64/terraform' failed with exit code 1
2021-10-29T12:30:16.7756780Z ##[section]Finishing: TerraformTaskV2
I have even attempted to move the modules folder inside tf.path so it sits in the same folder as the TF config files, changing the references from "../" to "./". No matter which location I extract the modules folder to (after downloading it as an artifact from another build pipeline), it cannot be found when referenced in the TF config files. I am fairly new to DevOps and would appreciate any help, or just being pointed in the right direction.
Define a system.debug: true variable at the global level to enable debug logs - maybe something there will give you a hint:
variables:
  system.debug: true
Apart from the downloaded artifacts, do you expect to have files checked out from the repo the pipeline is defined in? The deployment job doesn't check out git files by default, so you may want to add checkout: self to the steps there.
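For example, a sketch of the deploy steps with the explicit checkout added (everything else stays as it is):
strategy:
  runOnce:
    deploy:
      steps:
      - checkout: self   # deployment jobs skip the implicit checkout
      - task: DownloadBuildArtifacts@1
        # ... existing steps unchanged ...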
Unable to evaluate directory symlink: lstat ../Modules/global-vars - this is suspicious, I wouldn't expect any symlinks in there. But maybe the error message is just misleading.
A useful trick is to log the whole directory structure.
You can do this with a bash script step (you might need to apt install tree first):
- script: tree
Or with PowerShell (this works on an MS-hosted Linux agent):
- pwsh: Get-ChildItem -Path '$(agent.builddirectory)' -recurse

ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: exit status 1

I am trying to deploy my first app to a Google Cloud bucket using a Bitbucket pipeline, but I am getting the following error in the Google Cloud console:
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: exit status 1
ERROR
The command '/bin/sh -c yarn install --production || ((if [ -f yarn-error.log ]; then cat yarn-error.log; fi) && false)' returned a non-zero code: 1
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
error Found incompatible module
error acp-web@1.0.0: The engine "node" is incompatible with this module. Expected version "9.11.1". Got "9.11.2"
[1/5] Validating package.json...
warning package-lock.json found. Your project contains lock files generated by tools other than Yarn. It is advised not to mix package managers in order to avoid resolution inconsistencies caused by unsynchronized lock files. To clear this warning, remove package-lock.json.
yarn install v1.15.2
---> Running in c25c801a41d0
Step 5/6 : RUN yarn install --production || ((if [ -f yarn-error.log ]; then cat yarn-error.log; fi) && false)
---> 9a31a847bb75
[...]
Basically, I have a React.js app which needs to be deployed to Google Cloud. I have resolved all the other bugs, but this time I can't tell what the issue is.
bitbucket-pipeline.yml
image: node:10.15.1

pipelines:
  default:
    - step:
        name: Build and Test
        script:
          - npm install
          - npm test
    - step:
        name: Deploy
        script:
          - pipe: atlassian/google-app-engine-deploy:0.2.1
            variables:
              KEY_FILE: $KEY_FILE
              PROJECT: '[project-name] is here'
app.yaml
env: flex
runtime: custom
api_version: 1
threadsafe: true
handlers:
- url: /(.*\.(html|css|js|png|jpg|woff|json))
  static_files: dist/\1
  upload: dist/(.*\.(html|css|js|png|jpg|woff|json))
- url: /.*
  static_files: dist/index.html
  upload: dist/index.html
- url: /
  static_dir: build
skip_files:
- node_modules/
- ^\.git/.*
- ^(.*/)?#.*#$
- ^(.*/)?.*~$
- ^(.*/)?.*\.py[co]$
- ^(.*/)?.*/RCS/.*$
- ^(.*/)?\..*$
- ^(.*/)?.*\.bak$
I just want to deploy this app to Google Cloud App Engine.
It appears to be using the incorrect version of Node.js, as per this line:
error acp-web@1.0.0: The engine "node" is incompatible with this module. Expected version "9.11.1". Got "9.11.2"
You're specifying 10.15.1 in your pipeline, though. Can you ensure that the proper version is being applied for your project?
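If the exact pin is the culprit, one way out is to loosen the engines field in package.json so a patch bump still satisfies it (a sketch; assumes nothing in the app actually requires exactly 9.11.1):
# package.json currently demands "node": "9.11.1"; a range lets 9.11.2 pass.
# With npm 7.24+ this can be scripted:
npm pkg set engines.node=">=9.11.1"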
In my case, there were people using yarn and npm in the same project. Once I went into the repo and ran npm install, it updated a few packages and the Docker workflow was fine afterwards.
In the app.yaml file, you need to specify:
runtime: nodejs

Upload CodeBuild artifacts *if* they exist

I have a simple CodeBuild spec that defines artifacts to be uploaded after tests run:
artifacts:
  files:
    - cypress/**/*.png
  discard-paths: yes
These artifacts are only generated if the test action fails (a screenshot is captured of the failing test screen), and they are successfully uploaded to S3.
In the case that tests succeed, no .png files are generated and the CodeBuild action fails:
[Container] 2018/09/21 20:06:34 Expanding cypress/**/*.png
[Container] 2018/09/21 20:06:34 Phase complete: UPLOAD_ARTIFACTS Success: false
[Container] 2018/09/21 20:06:34 Phase context status code: CLIENT_ERROR Message: no matching artifact paths found
Is there a way to conditionally upload files if they exist in the buildspec?
Alternatively I could use the s3 cli -- in which case I would need a way to easily access the bucket name and artifact key.
To get around this, I'm creating a placeholder file that matches the glob pattern if the build succeeds:
post_build:
  commands:
    # CODEBUILD_BUILD_SUCCEEDING is "1" while the build is passing and "0" after a failure
    - if [ "$CODEBUILD_BUILD_SUCCEEDING" = "0" ]; then echo "Build failing, no need to create placeholder image"; else touch cypress/0.png; fi
artifacts:
  files:
    - cypress/**/*.png
  discard-paths: yes
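Alternatively, following the CLI idea from the question, the upload itself can be made conditional in post_build (a sketch; MY_ARTIFACT_BUCKET is a hypothetical variable you would have to supply yourself, e.g. through the build environment):
post_build:
  commands:
    - |
      # Only upload if at least one screenshot was actually produced
      if [ -n "$(find cypress -name '*.png' -print -quit)" ]; then
        aws s3 cp cypress/ "s3://$MY_ARTIFACT_BUCKET/$CODEBUILD_BUILD_ID/" --recursive --exclude "*" --include "*.png"
      fi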
If anyone is still looking for a solution based on tgk's answer: in my case I wanted to upload the artifact only for the master ENV, so for anything other than master I create a placeholder and upload it to a TMP folder.
post_build:
  commands:
    # Creating a fake file to work around the failed upload in non-prod builds
    - |
      if [ "$ENV" = "master" ]; then
        export FOLDERNAME=myapp-$(date +%Y-%m-%d).$((BUILD_NUMBER))
      else
        touch myapp/0.tmp;
        export FOLDERNAME="TMP"
      fi
artifacts:
  files:
    - myapp/build/outputs/apk/prod/release/*.apk
    - myapp/*.tmp
  discard-paths: yes
  name: $FOLDERNAME

AWS CodeBuild + CodePipeline: "No matching artifact paths found"

I am attempting to get CodePipeline to fetch my code from GitHub and build it with CodeBuild. The first (Source) step works fine. But the second (Build) step fails during the "UPLOAD_ARTIFACTS" part. Here are the relevant log statements:
[Container] 2017/01/12 17:21:31 Assembling file list
[Container] 2017/01/12 17:21:31 Expanding MyApp
[Container] 2017/01/12 17:21:31 Skipping invalid artifact path MyApp
[Container] 2017/01/12 17:21:31 Phase complete: UPLOAD_ARTIFACTS Success: false
[Container] 2017/01/12 17:21:31 Phase context status code: ARTIFACT_ERROR Message: No matching artifact paths found
[Container] 2017/01/12 17:21:31 Runtime error (No matching artifact paths found)
My app has a buildspec.yml in its root folder. It looks like:
version: 0.1
phases:
  build:
    commands:
      - echo `$BUILD_COMMAND`
artifacts:
  discard-paths: yes
  files:
    - MyApp
It would appear that the "MyApp" in my buildspec.yml should be something different, but I'm poring through all of the AWS docs to no avail (what else is new?). How can I get it to upload the artifact correctly?
The artifacts should refer to files downloaded from your Source action or generated as part of the Build action in CodePipeline. For example, this is from a buildspec.yml I wrote:
artifacts:
  files:
    - appspec.yml
    - target/SampleMavenTomcatApp.war
    - scripts/*
When I see that you used MyApp in your artifacts section, it makes me think you're referring to the OutputArtifacts of the Source action of CodePipeline. Instead, you need to refer to the files it downloads and stores there (i.e. in S3) and/or generates and stores there.
You can find a sample of a CloudFormation template that uses CodePipeline, CodeBuild, CodeDeploy, and CodeCommit here: https://github.com/stelligent/aws-codedeploy-sample-tomcat/blob/master/codebuild-cpl-cd-cc.json The buildspec.yml is in the same forked repo.
Buildspec artifacts are information about where CodeBuild can find the build output and how CodeBuild prepares it for uploading to the Amazon S3 output bucket.
For the error "No matching artifact paths found", a couple of things to check:
The artifact file(s) specified in the buildspec.yml file have the correct path and file name:
artifacts:
  files:
    - 'FileNameWithPath'
If you are using a .gitignore file, make sure the file(s) specified in the artifacts section are not included in .gitignore.
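A quick way to verify whether git considers a given path ignored (the path here is illustrative):
git check-ignore -v path/to/your/artifact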
Hope this helps.
In my case I received this error because I had changed directory in my build stage (the Java project I am building is in a subdirectory) and did not change back to the root. Adding cd .. at the end of the build stage did the trick.
I had a similar issue, and the solution was to package directories and files inside the archive with no further root folder creation.
https://docs.aws.amazon.com/codebuild/latest/userguide/sample-war-hw.html
Artifacts are the stuff you want from your build process - whether compiled in some way or just files copied straight from the source. So the build server pulls in the code, compiles it as per your instructions, then copies the specified files out to S3.
In my case, using Spring Boot + Gradle, the output jar file (when I run gradle bootJar on my own system) is placed in build/libs/demo1-0.0.1-SNAPSHOT.jar, so I set the following in buildspec.yml:
artifacts:
  files:
    - build/libs/*.jar
This one file appears for me in S3, optionally in a zip and/or a subfolder, depending on the options chosen in the rest of the artifacts section.
Try using version 0.2 of the buildspec. Here is a typical example for Node.js:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Nothing to do in the pre_build phase...
  build:
    commands:
      - npm install
      - npm run build
  post_build:
    commands:
      - echo Build completed on
artifacts:
  files:
    - appspec.yml
    - build/*
If you're like me and ran into this problem while using CodeBuild within a CodePipeline arrangement, you need to use the following:
- printf '[{"name":"container-name-here","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > $CODEBUILD_SRC_DIR/imagedefinitions.json
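Then reference that file in the artifacts section so CodePipeline picks it up (a minimal sketch; container-name-here in the printf above must match the container name in your ECS task definition):
artifacts:
  files:
    - imagedefinitions.json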
There was the same issue as @jd96 wrote; I needed to return to the root directory of the project to export the artifact.
build:
  commands:
    - cd tasks/jobs
    - make build
    - cd ../..
post_build:
  commands:
    - printf '[{"name":"%s","imageUri":"%s"}]' $IMAGE_REPO_NAME $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
artifacts:
  files: imagedefinitions.json