Prettier run on CI warns about an invalid file that does not exist - prettier

I have Prettier set up and it was working well, but after I added a file that didn't follow the rules I got an error in the GitHub Action, which was correct. Then I added this file to .prettierignore, and that didn't solve my problem.
I was trying to figure out why this was happening, but even now, after removing the file that was causing problems, I'm still getting an error saying it's badly formatted (which should be impossible because the file does not exist).
I'm running Prettier with the following command:
prettier . --ignore-path .gitignore "--check"
I'm getting the following error:
[warn] public/mockServiceWorker.js
[warn] Code style issues found in the above file. Forgot to run Prettier?
ERROR: "format:check" exited with 1.
Error: Process completed with exit code 1.
Here is my workflow file:
name: Validation
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  build:
    name: Validation
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js 16.15.0
        uses: actions/setup-node@v3
        with:
          node-version: 16.15.0
      - run: npm ci
        env:
          MY_TOKEN: ${{ secrets.MY_SECRET }}
      - run: npm run validate
Of course, it does not occur locally.

The issue was caused by npm ci creating the public/mockServiceWorker.js file, and .prettierignore was being ignored because I had provided a custom ignore file with --ignore-path .gitignore.
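For what it's worth, if you want to keep ignoring everything in .gitignore while also honoring .prettierignore, one option (my suggestion, not the original fix, assuming Prettier 3+, where --ignore-path can be passed more than once) is:

# A sketch, assuming Prettier 3+: pass both ignore files explicitly.
# On Prettier 2 only a single --ignore-path is read, so there you would
# need to copy the relevant .gitignore entries into .prettierignore instead.
prettier . --ignore-path .gitignore --ignore-path .prettierignore --check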

Related

Serverless Deployment Fails Only in Production Due to Unresolved Variable in Serverless.yml

Problem
AWS CodeBuild is throwing the following error during my TestBuild:
Cannot resolve serverless.yml: Variables resolution errored with:
- Cannot resolve variable at "provider.environment.SECRET": Value not found at "file" source,
- Cannot resolve variable at "provider.environment.REG": Value not found at "file" source,
- Cannot resolve variable at "provider.environment.DBSQUEMA": Value not found at "file" source
The relevant part of the serverless.yml file looks like this:
provider:
  stage: ${opt:stage, 'dev'}
  environment:
    STAGE: ${self:provider.stage, 'dev'}
    SECRET:${file(./global_config.yml):${self:provider.stage}.secret}
    REG: ${file(./global_config.yml):${self:provider.stage}.reg}
    DBSCHEMA:${file(./global_config.yml):${self:provider.stage}.dbschema}
The strange thing is that this problem does not happen when stage is set to UAT. It only happens when stage is set to Production. The relevant part of global_config.yml looks like this:
uat:
  secret: 'secret_string',
  reg: 'reg_string',
  dbschema: 'schema_string'
production:
  secret: 'secret_string',
  reg: 'reg_string',
  dbschema: 'schema_string'
My point is that the file exists and the variables are there for each stage (no spelling mistakes or anything like that).
This error occurs when running the following command: sls create_domain --stage "$STAGE"
Environment:
npm install -g serverless@3.19.0
npm install -g serverless-domain-manager@6.0.3
npm install -g serverless-python-requirements@5.4.0
npm install -g serverless-wsgi@3.0.0
We have tried ${opt:stage} and ${sls:stage} instead of ${self:provider.stage}, but the same error occurs, and only when the stage is production.
Thanks for your help.
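One way to narrow this down (a debugging suggestion; the thread shows no confirmed fix) is to have the framework resolve and print the final configuration for each stage locally. Note also that SECRET: and DBSCHEMA: above are missing the space after the colon that YAML requires to recognize a key/value pair, which is worth ruling out:

# A sketch, assuming Serverless Framework v3 is installed locally:
# resolve all variables and print the fully rendered config per stage.
sls print --stage uat
sls print --stage production
# If uat resolves but production errors, the failing variable paths point
# at the production block of global_config.yml, so inspect it for syntax
# differences between the two stages.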

Terraform init is failing to locate module when providing relative path

I've started building infrastructure using Terraform. Within that Terraform configuration I call a module using a relative path. This worked in a classic release pipeline, but I have been tasked with converting the pipeline to YAML. When the terraform init step runs, the agent finds the Terraform config files but can't find the modules folder, even though the artifact was downloaded in a previous task.
YAML file:
trigger:
- master
resources:
  pipelines:
  - pipeline: Dashboard-infra
    project: Infrastructure
    source: IT Dashboard
  - pipeline: Infra-modules
    project: Infrastructure
    source: AWS Modules
    trigger: true
stages:
- stage: Test
  displayName: Test
  variables:
  - group: "Non-Prod Keys"
  jobs:
  - deployment:
    displayName: string
    variables:
      region: us-east-1
      app_name: it-dashboard
      environment: test
      tf.path: 'IT Dashboard'
    pool:
      vmImage: 'ubuntu-latest'
    environment: test
    strategy:
      runOnce:
        deploy:
          steps:
          - task: DownloadBuildArtifacts@1
            inputs:
              buildType: 'specific'
              project: '23e9505e-a627-4681-9598-2bd8b6c1204c'
              pipeline: '547'
              buildVersionToDownload: 'latest'
              downloadType: 'single'
              artifactName: 'drop'
              downloadPath: '$(Agent.BuildDirectory)/s'
          - task: DownloadBuildArtifacts@1
            inputs:
              buildType: 'specific'
              project: '23e9505e-a627-4681-9598-2bd8b6c1204c'
              pipeline: '88'
              buildVersionToDownload: 'latest'
              downloadType: 'single'
              artifactName: 'Modules'
              downloadPath: '$(agent.builddirectory)/s'
          - task: ExtractFiles@1
            inputs:
              archiveFilePatterns: 'drop/infrastructure.zip'
              destinationFolder: '$(System.DefaultWorkingDirectory)'
              cleanDestinationFolder: false
              overwriteExistingFiles: false
          - task: ExtractFiles@1
            inputs:
              archiveFilePatterns: 'Modules/drop.zip'
              destinationFolder: '$(System.DefaultWorkingDirectory)'
              cleanDestinationFolder: false
              overwriteExistingFiles: false
          - task: TerraformInstaller@0
            inputs:
              terraformVersion: '0.12.3'
          - task: TerraformTaskV2@2
            inputs:
              provider: 'aws'
              command: 'init'
              workingDirectory: '$(System.DefaultWorkingDirectory)/$(tf.path)'
              commandOptions: '-var "region=$(region)" -var "app_name=$(app.name)" -var "environment=$(environment)"'
              backendServiceAWS: 'tf_nonprod'
              backendAWSBucketName: 'wdrx-deployments'
              backendAWSKey: '$(environment)/$(app.name)/infrastructure/$(region).tfstate'
Raw error log:
2021-10-29T12:30:16.5973748Z ##[section]Starting: TerraformTaskV2
2021-10-29T12:30:16.5981535Z ==============================================================================
2021-10-29T12:30:16.5981842Z Task : Terraform
2021-10-29T12:30:16.5982217Z Description : Execute terraform commands to manage resources on AzureRM, Amazon Web Services(AWS) and Google Cloud Platform(GCP)
2021-10-29T12:30:16.5982555Z Version : 2.188.1
2021-10-29T12:30:16.5982791Z Author : Microsoft Corporation
2021-10-29T12:30:16.5983122Z Help : [Learn more about this task](https://aka.ms/AA5j5pf)
2021-10-29T12:30:16.5983461Z ==============================================================================
2021-10-29T12:30:16.7253372Z [command]/opt/hostedtoolcache/terraform/0.12.3/x64/terraform init -var region=*** -var app_name=$(app.name) -var environment=test -backend-config=bucket=wdrx-deployments -backend-config=key=test/$(app.name)/infrastructure/***.tfstate -backend-config=region=*** -backend-config=access_key=*** -backend-config=secret_key=***
2021-10-29T12:30:16.7532941Z Initializing modules...
2021-10-29T12:30:16.7558115Z - S3-env in ../Modules/S3
2021-10-29T12:30:16.7578267Z - S3-env.Global-Vars in ../Modules/Global-Vars
2021-10-29T12:30:16.7585434Z - global-vars in
2021-10-29T12:30:16.7597321Z Error: Unreadable module directory
2021-10-29T12:30:16.7599087Z Unable to evaluate directory symlink: lstat ../Modules/global-vars: no such
2021-10-29T12:30:16.7599550Z file or directory
2021-10-29T12:30:16.7600779Z Error: Failed to read module directory
2021-10-29T12:30:16.7601405Z Module directory does not exist or cannot be read.
2021-10-29T12:30:16.7602573Z Error: Unreadable module directory
2021-10-29T12:30:16.7603271Z Unable to evaluate directory symlink: lstat ../Modules/global-vars: no such
2021-10-29T12:30:16.7603636Z file or directory
2021-10-29T12:30:16.7604749Z Error: Failed to read module directory
2021-10-29T12:30:16.7605370Z Module directory does not exist or cannot be read.
2021-10-29T12:30:16.7743995Z ##[error]Error: The process '/opt/hostedtoolcache/terraform/0.12.3/x64/terraform' failed with exit code 1
2021-10-29T12:30:16.7756780Z ##[section]Finishing: TerraformTaskV2
I have even attempted to move the modules folder inside the tf.path so it is within the same folder as the Terraform config files, and changed the module source from "../" to "./". No matter where I extract the modules folder to (after downloading it as an artifact from another build pipeline), it cannot be found when referenced from the Terraform config files. I am fairly new to DevOps and would appreciate any help, or just being pointed in the right direction.
Define the system.debug: true variable at the global level to enable debug logs - maybe something in there will give you a hint:
variables:
  system.debug: true
Apart from downloaded artifacts, do you expect to have files checked out from the repo the pipeline is defined in? The deployment job doesn't check out git files by default, so you may want to add checkout: self to the steps there, as sketched below.
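A minimal sketch of that change, based on the deployment job from the question (only the checkout line is new):

strategy:
  runOnce:
    deploy:
      steps:
      - checkout: self   # deployment jobs skip the implicit checkout unless you add it
      - task: DownloadBuildArtifacts@1
        # ...remaining steps from the question unchanged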
Unable to evaluate directory symlink: lstat ../Modules/global-vars - this is suspicious, I wouldn't expect any symlinks in there. But maybe the error message is just misleading.
A useful trick is to log the whole directory structure.
You can do this with a bash script step (might need to apt install tree first):
- script: tree
Or with PowerShell (this will work on an MS-hosted Linux agent):
- pwsh: Get-ChildItem -Path '$(agent.builddirectory)' -recurse

google cloud build: simple test file not found even though it shows up with ls

Trying to run a simple bash script on Google Cloud Build; when it runs, it says it cannot find the script, even though ls shows it is there.
I've set up a build trigger on Google Cloud to run a simple test repository on pushes to the main branch.
The test repository has just two files: the cloudbuild.yaml and a simple testfile.sh bash script.
cloudbuild.yaml tells it to run this testfile.sh file, but the build says it cannot find the file even though a simple ls argument shows it.
I've tried like every combination of ways to run a bash file:
with/without '-c' argument
with/without '.' argument
with/without file shebang
cloudbuild.yaml:
steps:
- name: 'ubuntu'
  entrypoint: 'bash'
  args: ['-c', 'testfile.sh']
testfile.sh:
echo "Go suck it, world!"
gcloud builds log <log-id>:
starting build "640c5ba5-5906-4296-a80c-9adc54ee84bb"
FETCHSOURCE
hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint:
hint: git config --global init.defaultBranch <name>
hint:
hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
hint: 'development'. The just-created branch can be renamed via this command:
hint:
hint: git branch -m <name>
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/test-wtf-2734586432/r/test-files
* branch 1d6fc0b27c09cb3421a242764dfe28bc115bf8f5 -> FETCH_HEAD
HEAD is now at 1d6fc0b Fix typo in entrypoint
BUILD
Pulling image: ubuntu
Using default tag: latest
latest: Pulling from library/ubuntu
Digest: sha256:adf73ca014822ad8237623d388cedf4d5346aa72c270c5acc01431cc93e18e2d
Status: Downloaded newer image for ubuntu:latest
docker.io/library/ubuntu:latest
bash: testfile.sh: command not found
ERROR
ERROR: build step 0 "ubuntu" failed: step exited with non-zero status: 127
I fixed it.
I had to get rid of the '-c' from the args list. With -c, bash treats testfile.sh as a command name to look up on the PATH rather than as a file, hence the command not found and exit code 127.
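The working step would then look like this (the question's cloudbuild.yaml with only the '-c' removed):

steps:
- name: 'ubuntu'
  entrypoint: 'bash'
  args: ['testfile.sh']  # bash now reads the script file from /workspace

Without -c, bash takes the argument as a script path relative to the working directory, so neither a shebang nor the executable bit is required.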

Continuous deployment from git using Cloud Build

I am trying to make a build trigger for Cloud Run using this tutorial,
but I get the following error message:
Starting Step #0
Step #0: Already have image (with digest): gcr.io/cloud-builders/docker
Step #0: unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
Finished Step #0
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
Does anyone know why?
EDIT: My project repo is split into frontend and backend folders. I am just trying to deploy my backend folder, which contains a Go API.
I have followed the tutorial you provided and encountered the same error message.
It seems the steps specified inside the cloudbuild.yaml file require a Dockerfile to exist in the repository's root folder. Specifically, the following instruction builds the image from your . folder.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/[SERVICE-NAME]:$COMMIT_SHA', '.']
There are two solutions to your problem. If you need to build a Docker image, simply creating the Dockerfile will solve your issue. Another solution is to not use a custom image. I have used the following cloudbuild.yaml file to deploy successfully:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - 'run'
  - 'deploy'
  - '[SERVICE-NAME]'
  - '--image'
  - 'gcr.io/cloudrun/hello'
  - '--region'
  - '[REGION]'
  - '--platform'
  - 'managed'
Notice how I'm still using a container image (gcr.io/cloudrun/hello).
-- edit
As explained by @guillaume-blaquiere, the tutorial takes for granted that your repository is already working on Cloud Run. You should check a Cloud Run tutorial before this one.
-- edit 2
A third solution, which worked for the OP, is to specify the path of the Dockerfile in the build instruction. That is done by replacing the . directory with the relative directory that contains the Dockerfile, as sketched below.
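For example, the build instruction could look like this (a sketch assuming the Dockerfile lives in the backend folder mentioned in the question's edit; 'backend' is a placeholder for the actual path):

- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/[SERVICE-NAME]:$COMMIT_SHA', 'backend']

docker build looks for the Dockerfile at the root of whatever directory is passed as the build context.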
The error says /workspace/Dockerfile: no such file or directory
I suppose your repository does not contain a Dockerfile at its root.

AWS CodeBuild Skipping invalid artifact path - not a valid identifier

I have an AWS CodeBuild project that builds two projects; during the build, the source gets built, bundled into zip files, and placed in bundles/*.
This is how the directory tree looks, where bundles contains the generated zip files to be deployed:
It uses the following buildspec.yml:
version: 0.2
phases:
  install:
    commands:
      - ./manager.sh install
  build:
    commands:
      - ./manager.sh build
      - ./manager.sh package
      - ./manager.sh test
      - ./manager.sh test:functional
      - ./manager.sh test:deploy
  post_build:
    commands:
      - ls -l bundles # I see the artifacts on the console using this
artifacts:
  files:
    - 'bundles/*'
The tests and the build pass, but the deploy then fails during artifact upload.
It returns Skipping invalid artifact path [edited] not a valid identifier . (where it should be bundles).
I have tried multiple combinations of the following:
This one returns Skipping invalid artifact path [edited] not a valid identifier bundles:
artifacts:
  base-directory: bundles
  files:
    - '**/*'
Or this one, which returns Skipping invalid artifact path [edited] not a valid identifier .:
artifacts:
  files:
    - bundles
Here is the full error:
[Container] 2018/02/12 19:13:05 Expanding /codebuild/output/tmp/env.sh: line 69: export: `npm_config_unsafe-perm': not a valid identifier
.
[Container] 2018/02/12 19:13:05 Skipping invalid artifact path /codebuild/output/tmp/env.sh: line 69: export: `npm_config_unsafe-perm': not a valid identifier
.
[Container] 2018/02/12 19:13:05 Phase complete: UPLOAD_ARTIFACTS Success: false
[Container] 2018/02/12 19:13:05 Phase context status code: CLIENT_ERROR Message: No matching base directory path found for /codebuild/output/tmp/env.sh: line 69: export: `npm_config_unsafe-perm': not a valid identifier
.
[Container] 2018/02/12 19:13:07 Runtime error (*errors.errorString: No matching base directory path found for /codebuild/output/tmp/env.sh: line 69: export: `npm_config_unsafe-perm': not a valid identifier
.)
Could it be my Docker container?
I tried multiple things and they all kept failing, so the only lead I had was this:
line 69: export: `npm_config_unsafe-perm'
This line appeared multiple times, and it comes from my Docker image. So I figured that maybe AWS CodeBuild was hitting a false positive of some sort on that error.
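As a side note (my observation, not part of the original answer): bash variable names may contain only letters, digits, and underscores, so the hyphen in npm_config_unsafe-perm makes every attempt to export it fail. You can reproduce the exact message in any bash shell:

$ export npm_config_unsafe-perm=true
bash: export: `npm_config_unsafe-perm': not a valid identifier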
I changed my image from lambci/lambda:build-nodejs6.10 to roelofr/node-zip:latest as a quick test, and lo and behold, it worked with no issues.
SO YES, A DOCKER IMAGE MAY BREAK YOUR BUILD EVEN IF THE REST IS GOOD, BEWARE
So I will change the image to something like a personal image that uses Node 6.10.3, just for validation purposes.