Regex: negative matcher ^(?!\\[functional\\]).+$ on a JavaScript test name fails to exclude [functional] - regex

In package.json I have 2 script commands:
"test:unit": "jest --watch --testNamePattern='^(?!\\[functional\\]).+$'",
"test:functional": "jest --watch --testNamePattern='\\[functional\\]'",
Copying ^(?!\\[functional\\]).+$ into https://regex101.com/, it does not match the test string below, taken from the first argument of describe():
describe("[functional] live tests", () => {
When changed to ([functional]).+$, the pattern does match. I have to remove a pair of \ on each end, which are only there as escapes for the .json file (I think).
Here is what I see when running npm run test:unit in my project root:
// the functional test runs (not desired)
$ npm run test:unit
functions/src/classes/__tests__/Functional.test.ts:30:47 - error TS2339: Property 'submit' does not exist on type 'Element'.
30 await emailForm.evaluate(form => form.submit());
~~~~~~
RUNS ...s/__tests__/Functional.test.ts
Test Suites: 1 failed, 1 skipped, 3 passed, 4 of 5 total
Tests: 2 skipped, 16 passed, 18 total
Snapshots: 0 total
Time: 8.965s, estimated 27s
Ran all test suites with tests matching "^(?!\[functional\]).+$".
Active Filters: test name /^(?!\[functional\]).+$/
The functional tests are not built out, which explains the syntax error; that is not important here. The key issue is why those tests were not skipped.
I believe the problem has to do with the regex negative matcher. The positive matcher (without the !) only matches tests that contain [functional], or are nested in a describe block with [functional]:
$ npm run test:functional
Test Suites: 1 failed, 4 skipped, 1 of 5 total
Active Filters: test name /\[functional\]/
Does anyone know why the negative regex pattern fails during npm run test:unit?
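For what it's worth, the lookahead itself behaves as expected in plain Node (the test names below are made up, and this assumes Jest applies --testNamePattern to the full test name):

const exclude = /^(?!\[functional\]).+$/;
// A hypothetical unit test name – no [functional] prefix, so the lookahead passes
console.log(exclude.test("MyComponent renders"));      // true
// The functional describe block name – the lookahead rejects it
console.log(exclude.test("[functional] live tests"));  // false

So the pattern is correct as a regex; whatever goes wrong presumably happens in how the escaped pattern travels from package.json through the shell to Jest.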

Instead of a regex fix, I changed the flag on the unit-testing script to an ignore pattern, copying the matching pattern used for [functional]:
"test:unit": "jest --watch --testIgnorePattern='\\[functional\\]'",
"test:functional": "jest --watch --testNamePattern='\\[functional\\]'",


Azure Pipeline: How to save Visual Studio "Test Results" to use with other tasks in the pipeline?

I have a pipeline (classic view) with the task "Visual Studio Test", with task version "2.*".
After the task completes I can see that it prints the test results in the log.
How can I save 'Total Tests' and 'Passed Tests' in variables to use in further tasks of the pipeline?
I tried extracting the .trx file, but it gets deleted after the task completes.
Running VsTest gives me this (some tests fail, but that's OK):
Adding trx file C:\vsts-agent-win-x64-2.165.2\_work\6\s\TestResults\TestResults\----.trx to run attachments
**************** Completed test execution *********************
Result Attachments will be stored in LogStore
Publishing test results to test run '3748'.
TestResults To Publish 189, Test run id:3748
Test results publishing 189, remaining: 0. Test run id: ---
Published test case results: 189
Result Attachments will be stored in LogStore
Run Attachments will be stored in LogStore
Received the command : Stop
TestExecutionHost.ProcessCommand. Stop Command handled
SliceFetch Aborted. Moving to the TestHostEnd phase
Please use this link to analyze the test run : https://---
Test run '---' is in 'Completed' state with 'Total Tests' : 202 and 'Passed Tests' : 19.
##[error]System.Exception: Some tests in the test run did not pass, failing the task.
##########################################################################
Finishing: VsTest - testPlan
When I try to cd into the TestResults:
+ cd C:\vsts-agent-win-x64-2.165.2\_work\6\s\TestResults\TestResults
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (C:\vsts-agent-w...lts\TestResults:String) [Set-Location], ItemNotFoundException
+ FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.SetLocationCommand
##[error]PowerShell exited with code '1'.
You can change the default test result output folder by setting the Test results folder field. See below:
Folder to store test results. When this input is not specified, results are stored in $(Agent.TempDirectory)/TestResults by default, which is cleaned at the end of a pipeline run
In the above example, the test result .trx file will be stored in the $(System.DefaultWorkingDirectory)\TestResults folder, which will not be cleaned up.
Then you can extract the .trx file in the following tasks and save 'Total Tests' and 'Passed Tests' in variables.
(Screenshots from my test pipeline: the VsTest task log, and a PowerShell task listing the contents of the folder.)
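For example, a rough sketch of reading the counters back out of the kept .trx file in a later PowerShell task (the folder and file name here are illustrative, not something the VsTest task guarantees):

# Pick up the first .trx from the redirected results folder (illustrative path)
$trxFile = Get-ChildItem "$env:SYSTEM_DEFAULTWORKINGDIRECTORY\TestResults" -Filter *.trx | Select-Object -First 1
[xml]$trx = Get-Content $trxFile.FullName
# The TRX schema keeps the totals on TestRun/ResultSummary/Counters
$counters = $trx.TestRun.ResultSummary.Counters
Write-Host "Total Tests: $($counters.total)"
Write-Host "Passed Tests: $($counters.passed)"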
So it seems VsTest deletes all its results after the task is complete.
I solved this with a REST API command.
Make sure you convert your Personal Access Token to Base64...
Here's how I did it:
$personalToken = "[your token]"
# Build a Basic auth header from the PAT (note the leading colon before the token)
$token = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($personalToken)"))
$header = @{ authorization = "Basic $token" }
# Query the test runs created for this build
$params = @{
    Uri     = 'https://dev.azure.com/[organization]/[project]/_apis/test/runs?buildIds=[BuildId]&api-version=6.0'
    Headers = $header
    Method  = 'GET'
}
$output = Invoke-RestMethod @params
$run = $output.value | Where-Object { $_.name -match "[BuildId]" }
Write-Host "Total Tests: $($run.totalTests)"
Write-Host "Passed Tests: $($run.passedTests)"
Write-Host "Failed Tests: $($run.unanalyzedTests)"
Write-Host "Skipped Tests: $($run.incompleteTests)"

Too many parts after splitting with regexes

I'm trying to parse some logs using split and regexes in PowerShell.
Here's my code:
$string = "Starting ChromeDriver 78.0.3904.70Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code. Test 229: Passed Test 260: Failed. Error message: Status: Test case failed. Steps: Navigate to: PurchReqTableListPage (purchreqpreparedbyme) Use the Quick Filter to find records. For example, filter on the Purchase requisition fION()</StackTrace> </Error> Playback results: Tests: 2 Passed: 1 Failed: 1"
$string -Split '(Test (\d)+:)'
Result:
Starting ChromeDriver 78.0.3904.70Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code.
Test 229:
9
Passed
Test 260:
0
Failed. Error message: Status: Test case failed. Steps: Navigate to: PurchReqTableListPage (purchreqpreparedbyme) Use the Quick Filter to find records. For example, filter on the Purchase requisition fION()</StackTrace> </Error> Playback results: Tests: 2 Passed: 1 Failed: 1
Expected result:
Starting ChromeDriver 78.0.3904.70Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code.
Test 229:
Passed
Test 260:
Failed. Error message: Status: Test case failed. Steps: Navigate to: PurchReqTableListPage (purchreqpreparedbyme) Use the Quick Filter to find records. For example, filter on the Purchase requisition fION()</StackTrace> </Error> Playback results: Tests: 2 Passed: 1 Failed: 1
On this site: https://regexr.com/3c0lf I tried this regex, and the groups captured were Test 260: and Test 229: (which is exactly what I want).
I do not understand where the 0 and the 9 come from.
Thanks a lot
Those are the last digits of the numbers: the 0 at the end of 260 and the 9 at the end of 229.
You are seeing those because you've created an additional capturing group by putting parentheses around the digits. Just remove them like so:
$string -Split '(Test \d+:)'
You probably don't even need those outer parentheses, leaving just
$string -Split 'Test \d+:'
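A quick demonstration of the difference (the sample string is made up; the comments show what each call returns):

"a Test 229: b" -split '(Test (\d)+:)'  # 'a ', 'Test 229:', '9', ' b'  – the inner group leaks its last capture
"a Test 229: b" -split '(Test \d+:)'    # 'a ', 'Test 229:', ' b'       – the delimiter itself is kept
"a Test 229: b" -split 'Test \d+:'      # 'a ', ' b'                    – the delimiter is dropped entirely

So if you want the Test nnn: pieces to appear in the output, as in your expected result, keep the single outer group.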

Unit testing in vuejs

I am trying to configure and run my first unit test for Vue.js, but I can't get past the configuration issues. I have tried installing the libraries, but for some reason I keep getting errors.
Here is an example of what my code looks like:
My directory structure:
hello/
  dist/
  node_modules/
  src/
    components/
      hello.vue
  test/
    setup.js
    test.spec.js
  .babelrc
  package.json
  webpack.config.js
Contents inside my files
src/components/hello.vue
<template> <div> {{message}} </div> </template>
<script>
export default {
  name: 'hello',
  data () { return { message: 'Hi' } },
  created () {
    // ...
  }
}
</script>
test/setup.js
// setup JSDOM
require('jsdom-global')()
// make expect available globally
global.expect = require('expect')
test/test.spec.js
import { shallow } from '@vue/test-utils'
import hello from '../src/components/hello.vue'

describe('hello', () => {
  // just testing simple data to see if it works
  it('works', () => {
    expect(1).toBe(1)
  })
})
.babelrc
{
  "env": {
    "development": {
      "presets": [
        [
          "env",
          {
            "modules": false
          }
        ]
      ]
    },
    "test": {
      "presets": [
        [
          "env",
          {
            "modules": false,
            "targets": {
              "node": "current"
            }
          }
        ]
      ],
      "plugins": [
        "istanbul"
      ]
    }
  }
}
package.json
...
"scripts": {
  "build": "webpack -p",
  "test": "cross-env NODE_ENV=test nyc mocha-webpack --webpack-config webpack.config.js --require test/setup.js test/**/*.spec.js"
},
"devDependencies": {
  "babel-core": "^6.26.0",
  "babel-loader": "^7.1.2",
  "babel-preset-env": "^1.6.1",
  "cross-env": "^5.1.1",
  "css-loader": "^0.28.7",
  "file-loader": "^1.1.5",
  "node-sass": "^4.7.2",
  "sass-loader": "^6.0.6",
  "vue-loader": "^13.5.0",
  "vue-template-compiler": "^2.5.9",
  "webpack": "^3.10.0",
  "webpack-dev-server": "^2.9.7",
  "jsdom": "^11.3.0",
  "jsdom-global": "^3.0.2",
  "mocha": "^3.5.3",
  "mocha-webpack": "^1.0.0-rc.1",
  "nyc": "^11.4.1",
  "expect": "^21.2.1",
  "@vue/test-utils": "^1.0.0-beta.12"
},
...
"nyc": {
  "include": [
    "src/**/*.(js|vue)"
  ],
  "instrument": false,
  "sourceMap": false
}
and finally my webpack.config.js
...
if (process.env.NODE_ENV == "test") {
  module.exports.externals = [require('webpack-node-externals')()]
  module.exports.devtool = 'inline-cheap-module-source-map'
}
Now when I run npm test from my root folder hello/, I get this error:
> hello@1.0.0 test C:\Users\john\vue-learn\hello
> npm run e2e
> hello@1.0.0 e2e C:\Users\john\vue-learn\hello
> node test/e2e/runner.js
Starting selenium server... started - PID: 12212
[Test] Test Suite
=====================
Running: default e2e tests
× Timed out while waiting for element <#app> to be present for 5000 milliseconds. - expected "visible" but got: "not found"
at Object.defaultE2eTests [as default e2e tests] (C:/Users/john/Google Drive/lab/hello/test/e2e/specs/test.js:13:8)
at _combinedTickCallback (internal/process/next_tick.js:131:7)
FAILED: 1 assertions failed (20.281s)
_________________________________________________
TEST FAILURE: 1 assertions failed, 0 passed. (20.456s)
× test
- default e2e tests (20.281s)
Timed out while waiting for element <#app> to be present for 5000 milliseconds. - expected "visible" but got: "not found"
at Object.defaultE2eTests [as default e2e tests] (C:/Users/john/Google Drive/lab/hello/test/e2e/specs/test.js:13:8)
at _combinedTickCallback (internal/process/next_tick.js:131:7)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! hello@1.0.0 e2e: `node test/e2e/runner.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the hello@1.0.0 e2e script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! C:\Users\john\AppData\Roaming\npm-cache\_logs\2018-04-03T23_53_15_976Z-debug.log
npm ERR! Test failed. See above for more details.
I don't know why this happens. When I first created my webpack project with npm init I didn't install a testing library, so there should be no conflicts, but I still get that error.
Update (after bounty):
I'm just trying to test my Vue.js application, hopefully with Jasmine/Karma. If anyone knows how to integrate these into a simple app and run the first test, I can take it from there. My problem is not writing tests but configuring them.
First of all, you didn't need to enable end-to-end testing in your project. I would say start fresh:
$ npm install -g vue-cli
$ vue init webpack vue-testing
? Project name vue-testing
? Project description A Vue.js project
? Author Tarun Lalwani <tarun.lalwani@payu.in>
? Vue build standalone
? Install vue-router? Yes
? Use ESLint to lint your code? Yes
? Pick an ESLint preset Standard
? Set up unit tests Yes
? Pick a test runner karma
? Setup e2e tests with Nightwatch? No
? Should we run `npm install` for you after the project has been created? (recommended) yarn
Say No to "Setup e2e tests with Nightwatch?" and choose Karma for "Pick a test runner".
$ npm test
> vue-testing@1.0.0 test /Users/tarun.lalwani/Desktop/tarunlalwani.com/tarunlalwani/workshop/ub16/so/vue-testing
> npm run unit
> vue-testing@1.0.0 unit /Users/tarun.lalwani/Desktop/tarunlalwani.com/tarunlalwani/workshop/ub16/so/vue-testing
> cross-env BABEL_ENV=test karma start test/unit/karma.conf.js --single-run
07 04 2018 21:35:28.620:INFO [karma]: Karma v1.7.1 server started at http://0.0.0.0:9876/
07 04 2018 21:35:28.629:INFO [launcher]: Launching browser PhantomJS with unlimited concurrency
07 04 2018 21:35:28.645:INFO [launcher]: Starting browser PhantomJS
07 04 2018 21:35:32.891:INFO [PhantomJS 2.1.1 (Mac OS X 0.0.0)]: Connected on socket M1HeZIiOis3eE3mLAAAA with id 44927405
HelloWorld.vue
✓ should render correct contents
PhantomJS 2.1.1 (Mac OS X 0.0.0): Executed 1 of 1 SUCCESS (0.061 secs / 0.041 secs)
TOTAL: 1 SUCCESS
=============================== Coverage summary ===============================
Statements : 100% ( 2/2 )
Branches : 100% ( 0/0 )
Functions : 100% ( 0/0 )
Lines : 100% ( 2/2 )
================================================================================
Now your npm test would work fine.
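For reference, the scaffolded unit test that this run executes looks roughly like the following (recalled from the default vue-cli webpack template, so treat it as a sketch rather than an exact copy):

import Vue from 'vue'
import HelloWorld from '@/components/HelloWorld'

describe('HelloWorld.vue', () => {
  it('should render correct contents', () => {
    // Mount the component and check the rendered heading text
    const Constructor = Vue.extend(HelloWorld)
    const vm = new Constructor().$mount()
    expect(vm.$el.querySelector('.hello h1').textContent)
      .to.equal('Welcome to Your Vue.js App')
  })
})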
According to the error logs you provide here, the failing tests are the end-to-end ones. Indeed, your npm test command runs the e2e script, which tests using Nightwatch. See under /test/e2e/specs: there you should have a default test file checking that your Vue application properly creates a DOM element identified as app.
The test should be the following:
// For authoring Nightwatch tests, see
// http://nightwatchjs.org/guide#usage

module.exports = {
  'default e2e tests': function (browser) {
    // automatically uses dev Server port from /config.index.js
    // default: http://localhost:8080
    // see nightwatch.conf.js
    const devServer = browser.globals.devServerURL

    browser
      .url(devServer)
      .waitForElementVisible('#app', 5000)
      .assert.elementPresent('.hello')
      .assert.containsText('h1', 'Welcome to Your Vue.js App')
      .assert.elementCount('img', 1)
      .end()
  }
}
In your case this test is failing because you have probably removed the App.vue file that is generated by the vue-cli scaffolding. The error you get is because the above test checks, with a 5-second timeout, whether a DOM node with id "app" is rendered (i.e. .waitForElementVisible('#app', 5000)).
Basically it fails because your application no longer provides this div (due to the removal of App.vue, maybe).
So you have two options here:
restoring the App.vue file (i.e. creating a div identified as 'app' on which you mount a Vue instance) – see the sketch below;
editing the end-to-end test according to your needs.
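For the first option, here is a minimal App.vue sketch (it assumes your component lives at src/components/hello.vue as in the question; the names are illustrative, so adjust them to your project):

<template>
  <div id="app">
    <hello/>
  </div>
</template>

<script>
import hello from './components/hello.vue'

export default {
  name: 'app',
  components: { hello }
}
</script>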
Hope this helps!

Running only changed or failed tests with CMake/CTest?

I work on a large code base that has close to 400 test executables, with run times varying between 0.001 second and 1800 seconds. When some bit of code changes, CMake intelligently rebuilds only the targets that have changed, which often takes less time than the actual test run.
The only way I know around this is to manually filter on tests you know you want to run. My intuition says that I would want to re-run any test suite that does not have a successful run stored - either because it failed, or because it was recompiled.
Is this possible? If so, how?
The ctest command accepts several parameters which affect the set of tests to run, e.g. "-R" filters tests by name and "-L" filters tests by label. You may also be able to choose tests to run using the dashboard-related options.
As for generating values for these options according to changed executables, you may write a program or script which checks the modification time of the executables and/or parses the last log file to find failed tests.
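For example, a rough shell sketch along those lines (it assumes the default CTest log location and test names free of regex metacharacters):

# Build a ctest -R filter from the failed tests of the previous run
log=Testing/Temporary/LastTestsFailed.log
if [ -f "$log" ]; then
    filter=$(cut -d: -f2- "$log" | paste -sd'|' -)
    ctest -R "($filter)"
fi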
Another way to run only changed executables is to wrap the tests in an additional script. This script will run the executable only if some condition is satisfied.
On Linux, the wrapper script could be implemented as follows:
test_wrapper.sh:
# test_wrapper.sh <test_name> <executable> <params..>
# Run the executable, given as the second argument, with the parameters given as further arguments.
#
# If the environment variable LAST_LOG_FILE is set,
# check whether this file is older than the executable.
#
# If the environment variable LAST_LOG_FAILED_FILE is set,
# check whether the test name is listed in this file.
#
# The test executable is run only if one of these checks succeeds, or if no check is performed.

check_succeed=
check_performed=

if [ -n "$LAST_LOG_FILE" ]; then
    check_performed=1
    executable=$2
    if [ ! -e "$LAST_LOG_FILE" ]; then
        check_succeed=1 # Log file is absent
    elif [ "$LAST_LOG_FILE" -ot "$executable" ]; then
        check_succeed=1 # Log file is older than the executable
    fi
fi

if [ -n "$LAST_LOG_FAILED_FILE" ]; then
    check_performed=1
    testname=$1
    if [ ! -e "$LAST_LOG_FAILED_FILE" ]; then
        : # No failed tests at all
    elif grep ":${testname}\$" "$LAST_LOG_FAILED_FILE" > /dev/null; then
        check_succeed=1 # Test failed on the previous run
    fi
fi

if [ -n "$check_performed" ] && [ -z "$check_succeed" ]; then
    echo "No need to run the test."
    exit 0
fi

shift 1 # remove the `testname` argument
eval "$*"
CMake function for adding a wrapped test:
CMakeLists.txt:
# Similar to add_test(), but the test is executed with our wrapper.
function(add_wrapped_test name command)
    if(name STREQUAL "NAME")
        # Complex add_test() command flow: NAME <name> COMMAND <command> ...
        set(other_params ${ARGN})
        list(REMOVE_AT other_params 0) # COMMAND keyword
        # Actual `command` argument
        list(GET other_params 0 real_command)
        list(REMOVE_AT other_params 0)
        # If `real_command` is a target, it needs to be translated into the path to the executable.
        if(TARGET ${real_command})
            # A generator expression is perfectly OK here.
            set(real_command "$<TARGET_FILE:${real_command}>")
        endif()
        # `command` is actually the value of the 'NAME' parameter
        add_test(NAME ${command} COMMAND /bin/sh <...>/test_wrapper.sh
            ${command} ${real_command} ${other_params}
        )
    else() # Simple add_test() command flow
        add_test(${name} /bin/sh <...>/test_wrapper.sh
            ${name} ${command} ${ARGN}
        )
    endif()
endfunction(add_wrapped_test)
When you want to run only those tests whose executables have changed since the last run, or which failed last time, use:
LAST_LOG_FILE=<build-dir>/Testing/Temporary/LastTest.log \
LAST_LOG_FAILED_FILE=<build-dir>/Testing/Temporary/LastTestsFailed.log \
ctest
All other tests will automatically pass (the wrapper exits 0 without running them).

Rails 4.1, Guard 2.10, and Minitest 5.4.1 -- RuntimeError & Rails::Generators::TestCase

I'm trying to set up Guard with Minitest in a new Rails 4 project. I updated my Gemfile with the following:
group :development do
gem 'guard'
gem 'guard-minitest'
end
And then ran bundle exec guard init minitest.
I've got a pretty simple test like so:
require 'test_helper'

describe ClassToBeTested do
  describe "#initialize" do
    it "should return a ClassToBeTested object" do
      obj = ClassToBeTested.new
      obj.must_be_kind_of ClassToBeTested
    end
  end
end
The class being tested is in app/services/class_to_be_tested.rb.
When I run bundle exec guard -n f I get the following:
11:35:43 - INFO - Guard::Minitest 2.3.2 is running, with Minitest::Unit 5.4.1!
11:35:43 - INFO - Running: all tests
Run options: --seed 36837
# Running:
E
Finished in 0.005190s, 192.6728 runs/s, 0.0000 assertions/s.
1) Error:
ClassToBeTested::#initialize#test_0001_should return a ClassToBeTested object:
RuntimeError: You need to configure your Rails::Generators::TestCase destination root.
1 runs, 0 assertions, 0 failures, 1 errors, 0 skips
11:35:45 - INFO - Guard is now watching at '/home/sean/Code/Ruby/work/project'
Is there something I'm missing? Something in Guard/Minitest/Rails that needs to be configured to work properly?
Figured it out -- apparently Minitest thought the class was a generator test. Adding the following line to test_helper.rb, inside the class TestCase block, fixed the issue:
register_spec_type(/ClassToBeTested/, Minitest::Spec)
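For context, this is roughly where that line ends up (a sketch of test/test_helper.rb, assuming a minitest-rails style setup where register_spec_type is available inside the TestCase class, as described above):

ENV['RAILS_ENV'] ||= 'test'
require File.expand_path('../../config/environment', __FILE__)
require 'rails/test_help'

class ActiveSupport::TestCase
  # Route specs describing ClassToBeTested to plain Minitest::Spec,
  # not Rails::Generators::TestCase.
  register_spec_type(/ClassToBeTested/, Minitest::Spec)
end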
Is there a way to have all the classes in test/services/ be subclassed under Minitest::Spec, and not the Generator test class?