I'm working with gulp and am fairly new; my gulp jshint task is as follows:
gulp.task('jshint', () => {
  return gulp.src([`${root}/**/*.js`])
    .pipe(jshint('.jshintrc'))
    .pipe(jshint.reporter('jshint-stylish'))
    .pipe(jshint.reporter('gulp-jshint-html-reporter', { filename: 'jshint-output.html' }))
    .pipe(jshint.reporter('fail'));
});
The task is taking 5 minutes every time I run my build. Is there any way to speed this up?
root here is src, so it is not running through node_modules.
Thanks in advance!
Figured it out: some lib files that shouldn't have been picked up were being linted, and as they were huge they made jshint take much longer. flips desk
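In case it helps anyone else, negated globs are the usual way to keep such files out of the stream. A minimal sketch, assuming the offending files live under a lib folder (the folder name is hypothetical):

```javascript
// Globs prefixed with "!" exclude matches from the stream, so jshint
// never sees the huge vendored files. "lib" is a stand-in for whatever
// folder actually holds them.
gulp.task('jshint', () => {
  return gulp.src([
      `${root}/**/*.js`,
      `!${root}/**/lib/**`
    ])
    .pipe(jshint('.jshintrc'))
    .pipe(jshint.reporter('jshint-stylish'))
    .pipe(jshint.reporter('fail'));
});
```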
I have a pretty big project with lots of modules. When I try to run a simple test like:
it('foo', () => {
  console.log('Running test');
});
the command yarn run jest --runTestsByPath src/simple-test.js takes 10 seconds, even though Babel reuses the transformed JavaScript from its cache.
After doing some profiling, I found that roughly 8 of those seconds are spent loading 32,000 files from node_modules/@babel*.
Is there a way to compile all of this into a single (big) JavaScript file once with webpack and then reuse it for all my tests?
Other than that, is it possible to see why each module is loaded (= which module imports the others) so we can maybe trim this?
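On the second question, one way to watch each CommonJS module load (and which request triggered it) is Node's built-in debug logging; the CommonJS loader logs under the module debuglog name. The jest invocation below is the one from the question; the log post-processing is just a sketch:

```shell
# Every CommonJS file Node loads is logged to stderr with a "MODULE" prefix,
# including the request that triggered the load.
NODE_DEBUG=module yarn run jest --runTestsByPath src/simple-test.js 2> module-load.log

# Rough tally of which packages account for the most loads:
grep -o 'node_modules/[^/]*' module-load.log | sort | uniq -c | sort -rn | head
```

The log is noisy, but grouping it like this usually makes the worst offenders obvious.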
Today, for some unexplained reason my jest test files started looping, resulting in a flickering terminal.
I am running jest src --watch, src being my source folder.
I followed a number of other discussions but none of them have helped solve my issue.
https://github.com/facebook/jest/issues/4635 is talking about a custom processor, but I am using a default setup.
I have tried ignoring folders.
I ended up removing all my test files, at which point the looping stopped. If I add a test file to __tests__, it matches the file but does not run a test. If I add the test file to my /src folder, it starts looping again, and it doesn't matter whether the actual test passes or fails. Even with a trivial test like
describe('Test Suite', () => {
  test('two plus two is four', () => {
    expect(2 + 2).toBe(4)
  })
})
it loops and flickers.
This is my jest setup:
"jest": {
  "verbose": false,
  "watchPathIgnorePatterns": [
    "<rootDir>/dist/",
    "<rootDir>/node_modules/"
  ],
  "globalSetup": "./jest-setup.js",
  "globalTeardown": "./jest-teardown.js",
  "testEnvironment": "./jest-mongo.js"
},
Does anyone know what is causing this to loop? I am not changing any files in any folder that would make --watch think it needs to run again, and there are no other apps (e.g. Dropbox) syncing the folder.
I am developing in VSCode, but the same thing happens if I test in a terminal window.
This was running fine just 5 hours ago, what went wrong?
It turns out that the jest setup file was writing a configuration file to disk while setting up a temporary MongoDB. If at least one of the tests used the MongoDB, the looping stopped; likewise, if I removed the setup files, the looping stopped.
So my problem started when, out of 30 test files, the one that connected to Mongo was edited (starting the looping/flickering). In trying to solve the problem I removed all the other test files, which left me with only the most basic tests, but the looping continued because nothing was connecting to Mongo.
Still not 100% sure of the exact mechanism, but when inheriting someone else's codebase that doesn't use the default jest setup, it's probably best to expand your jest knowledge and understand what's going on.
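Given that diagnosis, the general fix is to make the watcher ignore whatever file the global setup writes. A sketch of the config from the question with one extra ignore pattern; globalConfig.json is a hypothetical name for the file jest-setup.js writes:

```javascript
// jest.config.js (sketch): ignoring the file written during globalSetup
// stops --watch from treating each setup run as a source change.
module.exports = {
  verbose: false,
  watchPathIgnorePatterns: [
    '<rootDir>/dist/',
    '<rootDir>/node_modules/',
    '<rootDir>/globalConfig.json' // hypothetical file written by jest-setup.js
  ],
  globalSetup: './jest-setup.js',
  globalTeardown: './jest-teardown.js',
  testEnvironment: './jest-mongo.js'
};
```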
Basically, I want sourcemaps available for both the unminified and minified flavors of my site.css file. I'd like my end result to be:
site.css
site.min.css
site.css.map
site.css.min.map
Currently, I only get:
site.css
site.min.css
site.css.min.map
I know my gulp script is wrong, but I don't know how to fix it. I need sourcemaps to write a sourcemap for site.css before site.min.css gets created. HALP!
and Thank you
gulp.task('scss', gulp.series('bootstrap:scss', function compileScss() {
  return gulp.src(['./site/assets/scss/*.scss'])
    .pipe(sourcemaps.init())
    .pipe(sass.sync({
      outputStyle: 'expanded'
    }).on('error', sass.logError))
    .pipe(gulp.dest('./site/dist/css')) // outputs site.css
    .pipe(postcss([autoprefixer(), cssnano()]))
    .pipe(sourcemaps.write('.'))
    .pipe(rename({
      suffix: '.min'
    }))
    .pipe(gulp.dest('./site/dist/css')) // outputs site.min.css
}));
You only need sourcemaps for your unminified version. I would introduce a NODE_ENV for deciding whether to minify and write sourcemaps, then use gulp-if to check whether you're in a development or production environment.
Alternatively, you could have separate build and dev tasks.
Using process.env.NODE_ENV means you can use it in postcss.config.js files etc. too.
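A minimal sketch of that idea, assuming gulp-if is installed (required as gulpif) and NODE_ENV is set by whatever script invokes gulp; the task names and paths are taken from the question:

```javascript
const isProd = process.env.NODE_ENV === 'production';

gulp.task('scss', gulp.series('bootstrap:scss', function compileScss() {
  return gulp.src(['./site/assets/scss/*.scss'])
    .pipe(gulpif(!isProd, sourcemaps.init()))  // sourcemaps only in development
    .pipe(sass.sync({ outputStyle: 'expanded' }).on('error', sass.logError))
    .pipe(gulpif(isProd, postcss([autoprefixer(), cssnano()]))) // minify only in production
    .pipe(gulpif(isProd, rename({ suffix: '.min' })))
    .pipe(gulpif(!isProd, sourcemaps.write('.')))
    .pipe(gulp.dest('./site/dist/css'));
}));
```

Then NODE_ENV=production gulp scss emits site.min.css, while a plain gulp scss emits site.css plus its sourcemap.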
I found this really good because it shows how you can use a gulpfile.babel.js file with Gulp 4. I was using 3.9.1, reluctant to upgrade until this week, but this helped immensely with understanding the changes from v3 to v4.
I'm having a problem when running my Ember tests: once in every 3-5 tries it hits errors before running any tests. When I run in server mode I can see this output:
ReferenceError: Can't find variable: EmberENV at http://localhost:7357/3256/tests/index.html?hidepassed, line 42
ReferenceError: Can't find variable: define at http://localhost:7357/assets/test-loader-53146f185443881bff29aab3e80079e2.js, line 3
ReferenceError: Can't find variable: define at http://localhost:7357/assets/tests-a72d35574ec0d1ab014d4af21210a23a.js, line 1
When I look at the offending files referenced, they look like this:
/* globals requirejs, require */
(function() {
  define("ember-cli/test-loader",
    [],
    function() {
      "use strict";
      var moduleIncludeMatchers = [];
      var moduleExcludeMatchers = [];
      function addModuleIncludeMatcher(fn) {
        moduleIncludeMatchers.push(fn);
      };
etc...
As I understand, define() is a function introduced by requirejs, so it seems like it's just not loading before the tests begin. Any idea why this would be, and if there is any way to ensure things are loaded in the proper order?
Other important things: this doesn't seem to be an issue with the individual tests, since deleting them (even the first one, which would be hit first) makes no difference. It started happening occasionally after a big check-in where, among other things, we went from 130 to 174 tests, but nothing particularly strange seems to have been introduced. I've also tried cutting out pieces of the new code with no change; BUT if I revert to the previous version, it seems to work correctly every time. It could just be a matter of the codebase growing larger.
For versions of dependencies:
EmberCLI: 1.13.13
node: 5.4.1
PhantomJS: 2.1.1
Anything else that would be helpful to provide? Thanks.
Forgot to report back here that it has been fixed in my case.
First of all, this issue was reported here: https://github.com/ariya/phantomjs/issues/14173, and it's likely caused by an inline @import url(...) used in CSS.
The fix in my case was to write an alternative test runner which ignores the network request, similar to what @wagenet suggested in the above issue.
Hopefully that works for other use cases.
We had the same issue and were able to fix it by updating qunit to 1.20.0 in bower.json
"qunit": "~1.20.0",
I'm trying to troubleshoot a unit test issue.
I used to have a mostly working Maven -> PhantomJS -> Qunit setup, but it was unpredictable so I took it apart to try to fix it.
I upgraded the software:
Qunit: 1.11.0
PhantomJS: 1.8
Phantom Qunit Runner Latest: https://github.com/jquery/qunit/tree/master/addons/phantomjs
I see the web GUI working. It runs and passes all 102 tests. The console prints this:
$ phantomjs --disk-cache=false runner.js http://localhost/ui/dcx/test.html
$ Took 16ms to run 0 tests. 0 passed, 0 failed.
If I comment out the exit command in the runner, it prints the console output for QUnit.done multiple times.
$ phantomjs --disk-cache=false runner.js http://localhost/ui/dcx/test.html
$ PhantomJS successfully loaded a page
$ QUnit.done callback fired
$ Took 15ms to run 0 tests. 0 passed, 0 failed.
$ QUnit.done callback fired
$ Took 1840ms to run 102 tests. 102 passed, 0 failed.
$ QUnit.done callback fired
$ Took 1841ms to run 102 tests. 102 passed, 0 failed.
$ QUnit.done callback fired
$ Took 1842ms to run 102 tests. 102 passed, 0 failed.
$ QUnit.done callback fired
$ Took 1848ms to run 102 tests. 102 passed, 0 failed.
$ ^C
$
Looks to me like the QUnit.done callback is getting executed too soon, and then multiple times.
Does anyone know why that callback fires this way?
My test inclusions and login delay might be relevant. I use AMD modules to define tests and curl.js to bring them in. Nothing happens until the security login completes:
curl(['dolla'], function($) {
  $.ajax({
    type: 'POST',
    url: '/svc/j_spring_security_check',
    data: {
      j_username: '7',
      j_password: '7'
    },
    success: function() {
      loadTests()
    }
  });
})
var loadTests = function () {
  curl([
    // Unit tests
    'dcx/dataControls/activity.test'
    , 'dcx/dataControls/eventList.test'
    , 'dcx/dataControls/mapViewer.view.test'
    , 'dcx/pages/deviceDetails.view.test'
    , 'dcx/pages/login.test'
    , 'dcx/pages/nodeProfiles.test'
    , 'dcx/pages/settings.view.test'
  ], function() {}, function(ex) { throw new Error(ex) })
}
EDIT:
I'm down to a root cause, I think.
If you include QUnit on a blank page, it calls QUnit.begin and QUnit.done right away.
I need to delay execution of QUnit until after the security login is successful and curl has brought in my unit tests. Is there a way to delay the start of QUnit but still keep the QUnit object available? I can't use stop() because there are many async tests that will call start().
Found the answer. You can configure QUnit not to start automatically, then start it manually once all your tests are loaded. This prevents the duplicate calling of QUnit.done, which is the root cause of this issue.
http://forum.jquery.com/topic/are-qunit-and-requirejs-compatible#14737000001967123
This is one way to do it: modify the runner so it doesn't exit when there are no test results.
https://gist.github.com/SimpleAsCouldBe/5059623
This doesn't work, though: QUnit.done fires whenever the test stack is cleared, and in an asynchronously loaded environment like curl/RequireJS this can happen at any time.
This is for when you don't want to use require; in a browser context, for example.
Having spent ages finding a method to turn loading of scripts (and stylesheets) into Promises (see here), I then hit big problems with QUnit test suites starting to run before all of these had loaded. Typically a handful of tests at the start would complain that a certain variable or class was undefined, although later tests wouldn't have that difficulty.
You can stop automatic starting by going like this:
QUnit.config.autostart = false;
... seemingly just putting it in one of several files will suffice.
To start the QUnit tests, then, you have to go QUnit.start();. But, understandably perhaps, you can't execute this from inside any code which is being run by a QUnit test. This gets complicated. In the end I did this in my app-starting code:
await this.loadInjectedFile( GLOBAL_SCRIPT );
await this.loadInjectedFile( DBFORM_SCRIPT );
await this.loadInjectedFile( UDV_SCRIPT );
await this.loadInjectedFile( REACTIVITY_SCRIPT );
console.log( '... injected files loaded' );
// to allow QUnit to start testing
window.QUnitGreenLight = true;
... strictly speaking a naughty thing to do (allowing test-related code to sneak into your app code). A more compartmentalised approach could probably be found.
Then, inline in the HTML file from where you launch your testing:
<script>
const tryToStartTesting = function(){
  setTimeout( function(){
    if( window.QUnitGreenLight ){
      QUnit.start();
    }
    else {
      console.log( 'QUnit green light not yet given!' );
      tryToStartTesting();
    }
  }, 10 );
};
tryToStartTesting();
</script>
... in practice it seems to take maybe a few hundredths of a second before the green light is given.
A bit scrappy, perhaps, but it seems to work.