GitLab version: v14.1.1
The GitLab pipeline succeeds even though there are failed test cases in the unit tests.
.gitlab-ci.yml code:
unit-test:
  stage: Test
  script:
    - npm run test
  needs:
    - lint
  artifacts:
    when: always
    paths:
      - coverage
    reports:
      junit:
        - junit.xml
      cobertura:
        - coverage/cobertura-coverage.xml
    expire_in: 4 days
  only:
    - test-case-testing
    - merge_requests
Update: the test command used in package.json:
"test": "node ./node_modules/nyc/bin/nyc.js --reporter=cobertura --reporter=html node_modules/cucumber/bin/cucumber-js src/use-cases --parallel 5 --format=json --fail-fast --require \"src/use-cases/**/!(index).js\" | cucumber-junit > junit.xml",
How can I make the GitLab pipeline fail when there are any failed test cases? I read this but couldn't figure out what exact changes I should make.
The following change to the script handled the failure:

stage: Test
script:
  - npm run test
  - test -f junit.xml && grep -L "<failure" junit.xml

and this resolved the issue. grep -L lists files that do not contain a match and exits non-zero when every file does contain one, so the job fails whenever junit.xml records a <failure> element.
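For reference, the likely root cause is the pipe in the package.json script above: the shell reports the exit status of the last command in a pipeline, so a cucumber-js failure is masked by cucumber-junit succeeding. A hedged alternative sketch, wrapping the pipeline in bash with pipefail (this assumes bash is available in the job image):

"test": "bash -o pipefail -c 'node ./node_modules/nyc/bin/nyc.js --reporter=cobertura --reporter=html node_modules/cucumber/bin/cucumber-js src/use-cases --parallel 5 --format=json --fail-fast --require \"src/use-cases/**/!(index).js\" | cucumber-junit > junit.xml'",

With pipefail, the script exits with the first non-zero status in the pipeline, so npm run test itself fails and the grep guard becomes unnecessary.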
I have used the following YAML for my .NET 5 API project and xUnit test project, but it throws an error and my pipeline does not succeed. Where did I go wrong?
Note: The pipeline does not succeed even though the task executed the test cases, showing 15 test cases passed and 2 test cases failed.
- task: DotNetCoreCLI@2
  inputs:
    command: 'restore'
    projects: '**/GeniusData.Test/GeniusData.Test.csproj'
  displayName: 'Restore Projects'

- task: DotNetCoreCLI@2
  inputs:
    command: test
    projects: '**/*Test/*.csproj'
    arguments: '--configuration $(buildConfiguration) --collect "Code coverage"'
  displayName: 'Test Project'
You're using the DotNetCoreCLI@2 task, which will always fail when tests fail. That's by design: failing tests should break the build.
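If you deliberately want the run to continue past failing tests anyway, a hedged sketch (not from the original post) using the step-level continueOnError property, which reports the step as "succeeded with issues" instead of failing the pipeline:

- task: DotNetCoreCLI@2
  inputs:
    command: test
    projects: '**/*Test/*.csproj'
    arguments: '--configuration $(buildConfiguration) --collect "Code coverage"'
  displayName: 'Test Project'
  continueOnError: true  # downgrade a failed step to a warning

The published test results still show the failures; only the step outcome changes.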
I'm currently facing an issue with my Google Cloud Build for CI/CD.
First, I build new Docker images of multiple microservices and use Terraform to create the GCP infrastructure in which the containers will also live in production.
Then I perform some integration/system tests, and if everything is fine I push the new versions of the microservice images to the container registry for later deployment.
My problem is that the Terraformed infrastructure doesn't get destroyed if the Cloud Build fails.
Is there a way to always execute a Cloud Build step even if some previous steps have failed? Here I would want to always execute "terraform destroy".
Or, specifically for Terraform, is there a way to define a self-destructing Terraform environment?
cloudbuild.yaml example with just one Docker container:

steps:
  # build fresh ...
  - id: build
    name: 'gcr.io/cloud-builders/docker'
    dir: '...'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/staging/...:latest', '-t', 'gcr.io/$PROJECT_ID/staging/...:$BUILD_ID', '.', '--file', 'production.dockerfile']

  # push
  - id: push
    name: 'gcr.io/cloud-builders/docker'
    dir: '...'
    args: ['push', 'gcr.io/$PROJECT_ID/staging/...']
    waitFor: [build]

  # set up terraform
  - id: terraform-init
    name: 'hashicorp/terraform:0.12.28'
    dir: '...'
    args: ['init']
    waitFor: [push]

  # deploy GCP resources
  - id: terraform-apply
    name: 'hashicorp/terraform:0.12.28'
    dir: '...'
    args: ['apply', '-auto-approve']
    waitFor: [terraform-init]

  # tests
  - id: tests
    name: 'python:3.7-slim'
    dir: '...'
    waitFor: [terraform-apply]
    entrypoint: /bin/sh
    args:
      - -c
      - 'pip install -r requirements.txt && pytest ... --tfstate terraform.tfstate'

  # remove GCP resources
  - id: terraform-destroy
    name: 'hashicorp/terraform:0.12.28'
    dir: '...'
    args: ['destroy', '-auto-approve']
    waitFor: [tests]
Google Cloud Build doesn't yet support allow_failure or any similar mechanism, as mentioned in this unsolved but closed issue.
What you can do, as mentioned in the linked issue, is chain shell conditional operators.
If you want to run a command on failure, you can do something like this:
- id: tests
  name: 'python:3.7-slim'
  dir: '...'
  waitFor: [terraform-apply]
  entrypoint: /bin/sh
  args:
    - -c
    - pip install -r requirements.txt && pytest ... --tfstate terraform.tfstate || echo "This failed!"
This would run your tests as normal and then echo This failed! to the logs if the tests fail. If you want to run terraform destroy -auto-approve on failure, you would replace the echo "This failed!" with terraform destroy -auto-approve. You will also need the Terraform binary in the Docker image you are using, so you would need a custom image that has both Python and Terraform in it for that to work.
- id: tests
  name: 'example-custom-python-and-terraform-image:3.7-slim-0.12.28'
  dir: '...'
  waitFor: [terraform-apply]
  entrypoint: /bin/sh
  args:
    - -c
    - pip install -r requirements.txt && pytest ... --tfstate terraform.tfstate || (terraform destroy -auto-approve; false)
The above job also runs false at the end of the failure branch so that the step returns a non-zero exit code and is still marked as failed, instead of being considered successful just because terraform destroy succeeded. Note the grouping parentheses: without them, ; false would run unconditionally and fail the step even when the tests pass.
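An equivalent pattern that avoids chaining operators, sketched under the same custom-image assumption: a shell EXIT trap runs the destroy whether the tests pass or fail, and in POSIX shells the step keeps the tests' exit status because the trap does not call exit itself:

- id: tests
  name: 'example-custom-python-and-terraform-image:3.7-slim-0.12.28'
  dir: '...'
  waitFor: [terraform-apply]
  entrypoint: /bin/sh
  args:
    - -c
    - |
      # Always tear down the Terraformed resources, pass or fail;
      # the step still exits with pytest's status.
      trap 'terraform destroy -auto-approve' EXIT
      pip install -r requirements.txt
      pytest ... --tfstate terraform.tfstate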
An alternative to this would be to use something like Test Kitchen, which will automatically stand up infrastructure, run the necessary verifiers, and then destroy it at the end, all in a single kitchen test command.
It's probably also worth mentioning that your pipeline is entirely serial, so you don't need to use waitFor. This is mentioned in the Google Cloud Build documentation:

A build step specifies an action that you want Cloud Build to perform. For each build step, Cloud Build executes a docker container as an instance of docker run. Build steps are analogous to commands in a script and provide you with the flexibility of executing arbitrary instructions in your build. If you can package a build tool into a container, Cloud Build can execute it as part of your build. By default, Cloud Build executes all steps of a build serially on the same machine. If you have steps that can run concurrently, use the waitFor option.

and

Use the waitFor field in a build step to specify which steps must run before the build step is run. If no values are provided for waitFor, the build step waits for all prior build steps in the build request to complete successfully before running. For instructions on using waitFor and id, see Configuring build step order.
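For completeness, the one waitFor value that changes behaviour in a serial pipeline is the special '-', which starts a step immediately rather than after all prior steps; a small sketch:

- id: independent-step
  name: 'gcr.io/cloud-builders/docker'
  args: ['version']
  waitFor: ['-']  # runs at the start of the build, in parallel with earlier steps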
I am trying to deploy my first app to Google Cloud using a Bitbucket pipeline, but I am getting the following error in the Google Cloud console.
[...]
 ---> 9a31a847bb75
Step 5/6 : RUN yarn install --production || ((if [ -f yarn-error.log ]; then cat yarn-error.log; fi) && false)
 ---> Running in c25c801a41d0
yarn install v1.15.2
warning package-lock.json found. Your project contains lock files generated by tools other than Yarn. It is advised not to mix package managers in order to avoid resolution inconsistencies caused by unsynchronized lock files. To clear this warning, remove package-lock.json.
[1/5] Validating package.json...
error acp-web@1.0.0: The engine "node" is incompatible with this module. Expected version "9.11.1". Got "9.11.2"
error Found incompatible module
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
The command '/bin/sh -c yarn install --production || ((if [ -f yarn-error.log ]; then cat yarn-error.log; fi) && false)' returned a non-zero code: 1
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: exit status 1
Basically, I have a React JS app that needs to be deployed to Google Cloud. I have resolved all the other bugs successfully, but at this point I can't tell what the issue is.
bitbucket-pipelines.yml:

image: node:10.15.1

pipelines:
  default:
    - step:
        name: Build and Test
        script:
          - npm install
          - npm test
    - step:
        name: Deploy
        script:
          - pipe: atlassian/google-app-engine-deploy:0.2.1
            variables:
              KEY_FILE: $KEY_FILE
              PROJECT: '[project-name] is here'
app.yaml:

env: flex
runtime: custom
api_version: 1
threadsafe: true

handlers:
  - url: /(.*\.(html|css|js|png|jpg|woff|json))
    static_files: dist/\1
    upload: dist/(.*\.(html|css|js|png|jpg|woff|json))
  - url: /.*
    static_files: dist/index.html
    upload: dist/index.html
  - url: /
    static_dir: build

skip_files:
  - node_modules/
  - ^\.git/.*
  - ^(.*/)?#.*#$
  - ^(.*/)?.*~$
  - ^(.*/)?.*\.py[co]$
  - ^(.*/)?.*/RCS/.*$
  - ^(.*/)?\..*$
  - ^(.*/)?.*\.bak$
I just want to deploy this app to Google Cloud App Engine.
It appears to be using the incorrect version of Node.js, as per this line:
error acp-web@1.0.0: The engine "node" is incompatible with this module. Expected version "9.11.1". Got "9.11.2"
You're specifying 10.15.1 in your pipeline, though. Can you ensure that the proper version is being applied for your project?
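The wording of the error suggests the project's package.json pins an exact Node version in its engines field, which rejects even patch releases. A hypothetical excerpt, loosened to a range that would accept 9.11.2:

{
  "engines": {
    "node": ">=9.11.1"
  }
}

Whether an exact pin or a range is appropriate depends on the project; the point is that the engines constraint, not the pipeline image alone, drives this check.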
In my case, there were people using yarn and npm in the same project. Once I went into the repo and ran npm install, it updated a few packages, and the Docker workflow was fine afterwards.
In the app.yaml file, you need to specify:

runtime: nodejs
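For the flexible environment, a minimal sketch might look like the following; note that api_version, threadsafe, and skip_files are standard-environment settings and are not used with env: flex:

runtime: nodejs
env: flex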
I'm trying to understand the GitLab Pipelines and after a few tries I was able to successfully automate my unit tests. Now I'm trying to add the code coverage badge into my project and/or readme file but it always seems to show unknown.
Files:
+ application
+ system
- unit-tests
    - tests
        UtilTest.php
    autoload.php
    phpunit
.gitignore
.gitlab-ci.yml
.htaccess
index.php
readme.md
.gitlab-ci.yml:

image: php:5.6

stages:
  - test

app:unit-tests:
  stage: test
  script:
    - php ./unit-tests/phpunit --bootstrap ./unit-tests/autoload.php ./unit-tests/tests
  coverage: '/Code Coverage: \d+\.\d+/'
On the project's Test coverage parsing section I have the same pattern set up.
So I was able to fix this by using PHP 7.2 as the Docker image and installing xdebug in a before_script block.
.gitlab-ci.yml:

image: php:7.2

stages:
  - test

before_script:
  - pecl install xdebug
  - docker-php-ext-enable xdebug

app:unit-tests:
  stage: test
  script:
    - php ./unit-tests/phpunit --bootstrap ./unit-tests/autoload.php ./unit-tests/tests --coverage-text --colors=never
  coverage: '/^\s*Lines:\s*\d+.\d+\%/'
I had to use PHP 7.2 because when I tried running pecl install xdebug it said it requires PHP 7. Ideally I would like to use PHP 5.6, because that's what our current server runs, so that the tests are on similar versions, but I'll leave it as it is for now.
I had to add --coverage-text --colors=never to the script call for it to output the numbers. Then I changed the coverage setting to '/^\s*Lines:\s*\d+.\d+\%/', which I also used under the Test coverage parsing section in the project settings.
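For reference, that regex is meant to match the summary block phpunit prints with --coverage-text; an illustrative (made-up) excerpt of that output:

Code Coverage Report Summary:
  Classes: 100.00% (1/1)
  Methods: 100.00% (4/4)
  Lines:   95.00% (19/20)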
And now the code coverage properly shows me my expected values.
I have created an automated selenium test script which works perfectly fine.
My task now is to set up GitLab CI and try to automatically run this Selenium script when I push to Git.
Is it possible to make the Selenium script execute automatically and inform the user whether it runs successfully or fails?
Thank you
How to automatically run automation tests on GitLab CI with Selenium and SpecFlow in a .NET project? If this is what you are looking for, then here is the core part, which is setting up the gitlab-ci.yml file. Here is how the sample gitlab-ci.yml should look:
image: <your own Docker image that can pull .NET dependencies>

variables:
  DOCKER_DRIVER: overlay2
  SOURCE_CODE_DIRECTORY: 'src'
  BINARIES_DIRECTORY: 'bin'
  OBJECTS_DIRECTORY: 'obj'
  NUGET_PACKAGES_DIRECTORY: '.nuget'

stages:
  - Build
  - Test

before_script:
  - 'dotnet restore ${SOURCE_CODE_DIRECTORY}/TestProject.sln --packages ${NUGET_PACKAGES_DIRECTORY}'

Build:
  stage: Build
  script:
    - 'dotnet build $SOURCE_CODE_DIRECTORY/TestProject.sln --no-restore'
  except:
    - tags
  artifacts:
    paths:
      - '${SOURCE_CODE_DIRECTORY}/*/${BINARIES_DIRECTORY}'
      - '${SOURCE_CODE_DIRECTORY}/*/${OBJECTS_DIRECTORY}'
      - '${NUGET_PACKAGES_DIRECTORY}'
    expire_in: 2 hr

Test:
  stage: Test
  services:
    - selenium/standalone-chrome:latest
  script:
    - 'export MSBUILDSINGLELOADCONTEXT=1'
    - 'export selenium_remote_url=http://selenium__standalone-chrome:4444/wd/hub/'
    - 'export PATH=$PATH:${SOURCE_CODE_DIRECTORY}/chromedriver.exe'
    - 'dotnet test $SOURCE_CODE_DIRECTORY/ExpressTestProject.sln --no-restore'
  artifacts:
    paths:
      - '${SOURCE_CODE_DIRECTORY}/chromedriver.exe'
      - '${SOURCE_CODE_DIRECTORY}/*/${BINARIES_DIRECTORY}'
      - '${SOURCE_CODE_DIRECTORY}/*/${OBJECTS_DIRECTORY}'
      - '${NUGET_PACKAGES_DIRECTORY}'
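One note on the hostname in selenium_remote_url: GitLab derives selenium__standalone-chrome from the service image name (the slash becomes a double underscore). If you prefer an explicit name, services also accept an alias; a hedged sketch of just the Test job's service wiring:

Test:
  stage: Test
  services:
    - name: selenium/standalone-chrome:latest
      alias: selenium   # reachable as plain "selenium" inside the job
  script:
    - 'export selenium_remote_url=http://selenium:4444/wd/hub/'
    - 'dotnet test $SOURCE_CODE_DIRECTORY/ExpressTestProject.sln --no-restore'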
That's it. When you set up your project with this .gitlab-ci.yml, 90% of your job is done.
The tests will run automatically in GitLab whenever you commit something to your source tree or TFS.
Thanks