How do I run some config before every Jest test run

I'm writing tests for a React application which makes use of Fluxxor to provide an event dispatcher. Making that work requires telling Jest not to mock a few modules which are used internally, and are provided by Node itself.
That means I can't just add them to the unmockedModulePathPatterns config key, and instead have to use some code like this:
['util', 'events'].forEach(function (module) {
  jest.setMock(module, require.requireActual(module));
});
However, I can't find anywhere useful to put it. I've got a setupEnvScriptFile which sets up a few globals that I use in almost all my tests, but the jest object doesn't seem to be available in that context, so I can't just set the mocks there.
As a hacky stopgap measure I've wrapped the code above in a function which I call at the beginning of any describe blocks testing Fluxxor stores, but it's far from ideal.
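Concretely, the stopgap looks roughly like this (the helper name is just illustrative):
// hacky stopgap: call this at the top of every describe() that tests a Fluxxor store
function unmockNodeBuiltins() {
  ['util', 'events'].forEach(function (module) {
    jest.setMock(module, require.requireActual(module));
  });
}

describe('MyStore', function () {
  unmockNodeBuiltins();
  // ...store tests...
});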

Have you tried config.setupTestFrameworkScriptFile? It seems like it would be the right place to monkey-patch the API, as per the docs.
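For reference, a sketch of what that suggestion would look like (the file name is a placeholder):
// package.json (or your Jest config): "setupTestFrameworkScriptFile": "<rootDir>/setup-jest.js"
// setup-jest.js:
['util', 'events'].forEach(function (module) {
  jest.setMock(module, require.requireActual(module));
});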

It seems that the answer, at least currently, is "you can't in this case", but there are issues open for the two changes that need to be made to support it.
https://github.com/facebook/jest/issues/106
https://github.com/facebook/jest/issues/107

FWIW, here's a solution that we have been using to add Fluxxor and React-Router support to our test specs.
https://gist.github.com/adjavaherian/a15ef0461e65d58aacd2

Related

Ember Testing: Why use Assert.?

I see a lot of examples (including ember-cli generated tests) that use assert.function(), but I can call the function as-is, so am I doing something wrong, or do the examples just include qualifiers that aren't really necessary?
For example, either of these work in a new generated unit test:
assert.expect(1);
expect(1);
Why ever do the first one if the second one works?
This is actually a QUnit change, not an Ember one. QUnit is changing its API as it moves towards 2.0. You can use the global versions now, but they'll be removed in 2.0, so it's probably a good idea to use the assert.* versions now so you don't have to change the code later.
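For example, the same test in both styles (the test name and assertions are just placeholders):
// assert-scoped style: keeps working in QUnit 2.0
QUnit.test('it computes the duration', function (assert) {
  assert.expect(1);
  assert.ok(true, 'passes');
});

// global style: works today but is removed in QUnit 2.0
test('it computes the duration', function () {
  expect(1);
  ok(true, 'passes');
});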

Create a simple unit tests framework from scratch in Coldfusion

I know there are existing tools for testing a ColdFusion application (MXUnit, MockBox), but I'm creating a custom tool, so that it will require less configuration.
When I run a unit test file, it's done via a generic 'model' which retrieves all functions from the unit test file. Within each test function, I have to call assertion functions like assertEquals -- but those functions live in the model, so I cannot access them from the test file.
I tried passing the model itself to the unit test file so it can call the model's functions directly, but it doesn't work, and it adds logic to the test file, which I don't like.
I could also extend the model in the test file, but then I would have to call the test file directly, call super.init(this) so the model can fetch the test functions, etc.
Is there a way to achieve this kind of process? What's the best option?
In answer to your question, it sounds like you want to inject variables / methods into the subject under test. You can do it like so:
myInstance["methodName"] = myFunction;
You can then call the injected method like so:
myInstance.methodName();
Both MXUnit and TestBox use this technique.
Having said that, I don't quite understand why you want to re-invent the wheel. TestBox is an excellent, proven testing framework with a wealth of features that would take you an incredible amount of time to replicate. I'm not quite sure what your configuration issue could be - it really doesn't require very much setup. It might be worth asking how to set up and use TestBox rather than how to build your own testing solution :)
There is a good book on TestBox (available in a free version) which you can read here: http://testbox.ortusbooks.com/
Good luck!

TDD with an MP3 library

I'm trying to learn TDD, and my first project is a PHP-based project to help me organise my somewhat small MP3 collection. I'm aware that there are many other, much better solutions out there, but this is simply for the sake of getting to grips with TDD.
I require a method that will accept a filename and return the duration of the MP3 using a command-line call to ffmpeg. Is it possible to test this method short of pointing it at a real MP3? Should I just test the simple things, such as whether the file even exists? Should I bother testing it at all?
I'd appreciate your thoughts.
Many thanks in advance.
EDIT: I'm sorry for not mentioning that the call to ffmpeg is not via a class or API as such, but via the CLI.
$output = shell_exec("{$ffmpeg_exe} -i \"{$file_path}\" 2>&1");
This is where I'm having trouble testing. I don't think there's any way to mock this without using Runkit, which I'd like to avoid, since it can't be installed via Composer, and I'd like to avoid dependencies that need to be compiled.
It depends on what you want to call a "unit test" :)
In my opinion, a unit test should not depend on anything outside of the class under test. It definitely should not be making network requests, database calls, or touching the file system.
The test you describe is one I would call an integration or acceptance test, and writing a failing acceptance test is often the start of my personal TDD cycle. Then, if appropriate, I break that down into smaller pieces and write a failing unit test, and cycle red-green-refactor on multiple unit tests until the acceptance test is passing.
Another way to make your tests more isolated is to use test doubles (the general term for mocks, stubs, fakes, spies, etc.). This might be the best approach in your case: have a test double that stands in for the object that interacts with the file system, but whose behaviour you control. I would add an argument so it can be passed in, and fall back to the built-in file-interaction functions when nothing is passed in.
I haven't done PHP in a while, but here's some pseudocode:
function duration($filename, $filesystem = null) {
    // fall back to a real file-system wrapper when nothing is injected
    $filesystem = $filesystem ?: new Filesystem();   // placeholder class
    $file = $filesystem->find($filename);
    return $file->duration();
}
That way, in your tests you can do something like:
// test: duration() returns the duration of the file whose name is passed in
$fake_file = new FakeFile();              // stub whose duration() returns "3:08"
$fake_filesystem = new FakeFilesystem(); // stub whose find($filename) returns $fake_file
assertEquals("3:08", duration("some_filename.mp3", $fake_filesystem));
Again, super pseudocode!!! Best of luck!!
Response to your edit:
Aaahhh I see. I assume you're then parsing the $output to extract the information you want and to handle any errors that might be returned by running this command. That, IMO, is the interesting part of your code that should be tested. So I'd probably put the shell_exec within a function that I would then not bother testing, but in the tests for your parsing logic, I would stub the return value of this function to be able to test what you do when you get various output.
I'd also have at least one integration or acceptance test that would actually call shell_exec and actually need a file present at the $file_path.
This shell_exec line is basically doing string concatenation, so it's not really worth testing. TDD is not so much a rule as a guideline, and sometimes you have to decide not to test something :)
You can stub ffmpeg so you don't have to use real files.

How to unit test with jasmine and browserify?

Is there a good way to run the Jasmine HTML reporter with Browserify-style code? I also want to be able to run this headless with PhantomJS, hence the need for the HTML reporter.
I've created a detailed example project which addresses Jasmine testing (among other things) - see https://github.com/amitayd/grunt-browserify-jasmine-node-example. There's a discussion at my blog post.
The approach there was to create one Browserify bundle for the main source code (where all the modules are exposed), and one for the tests which marks the main source code as external. The tests can then be run both in PhantomJS and in a real browser.
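As a rough sketch of that split (the paths and the exposed module name are assumptions, not taken from the example project), the two bundles can be built with the Browserify API like this:
var fs = require('fs');
var browserify = require('browserify');

// main bundle: expose the app's entry module under a stable name
browserify()
  .require('./src/app.js', { expose: 'app' })
  .bundle()
  .pipe(fs.createWriteStream('dist/app-bundle.js'));

// test bundle: mark the app as external so the specs reuse the main bundle
browserify('./spec/spec_entry.js')
  .external('app')
  .bundle()
  .pipe(fs.createWriteStream('spec/spec-bundle.js'));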
I don't think there's a jasmine-browserify package yet, and it doesn't really match Browserify/NPM's way of doing things (avoid global exports).
For now, I just include /node_modules/jasmine-reporters/ext/jasmine.js and jasmine-html.js at the top of my <head>, and require all my specs in a top-level spec_entry.js that I then use as the entry point for a Browserify bundle that I put right afterwards in the <head>. (Note that if the entry point is not top-level, you'll have a bad time due to a long-lasting, gnarly bug in Browserify).
This plays nicely with jasmine-node as long as you don't assume the presence of a global document or window. However, you do have to remember to register your specs in that spec_entry.js, unless you want to hack Browserify to get it to crawl your directories for .spec.js files.
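For illustration, spec_entry.js is nothing more than a list of requires (the file names here are made up):
// spec_entry.js -- Browserify entry point that pulls in every spec
require('./models/track.spec.js');
require('./views/player.spec.js');
// ...one require per spec file; new specs have to be added here by hand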
I'd be very interested in a more elegant solution, though, that would transparently work with jasmine-node and browserify.
If you use grunt-watchify, there's no need to create spec_entry.js. Just use require in your specs, and then bundle your specs with grunt-watchify:
watchify: {
  test: {
    src: './spec/**/*Spec.js',
    dest: 'spec/spec-bundle.js'
  }
},
jasmine: {
  test: {
    options: {
      specs: 'spec/spec-bundle.js'
    }
  }
},
Then run your tests with
grunt.registerTask('test', ['watchify:test','jasmine:test']);
As all the answers above are a little outdated (which doesn't mean they no longer work), I would like to point to https://github.com/nikku/karma-browserify, a preprocessor for the Karma runner. It bundles each test file together with all of its required dependencies, and the resulting Browserify bundle is passed to Karma, which runs it based on your configuration. Be aware that you can choose any modern test framework (Jasmine, Mocha, ...) and browsers (PhantomJS, Chrome, ...). This is probably exactly what you need :)
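A minimal karma.conf.js along those lines might look like this (the framework and browser choices are just one possible combination):
// karma.conf.js -- minimal karma-browserify setup
module.exports = function (config) {
  config.set({
    frameworks: ['browserify', 'jasmine'],
    files: ['spec/**/*Spec.js'],
    preprocessors: {
      'spec/**/*Spec.js': ['browserify']   // bundle each spec with its require'd dependencies
    },
    browsers: ['PhantomJS']
  });
};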
You may also want to look into Karma. It's really simple to set up, and it will watch for changes and rerun your tests. Check out this sample project that uses Karma to test a browserify/react project. You just need to add a few dependencies and create a karma.conf.js file.
https://github.com/TYRONEMICHAEL/react-component-boilerplate

How can I determine the name of my unit test before its execution?

I was using MSTest and all was fine. Not long ago, I needed to write a large number of data driven unit tests.
Moreover, I needed to know the name of the test just before I run it, so I could populate the data sources with the correct parameters (that were fetched from an external remote service).
Nowhere in MSTest could I find a way to get the name of the tests that are about to run before their actual execution. At this point it was, of course, already too late, since the data sources were already populated.
What I need is to know the names of the test that are about to execute so I could configure their data sources in advance, before their execution.
Somebody suggested I "check out NUnit". I am completely clueless about NUnit. For now I have started reading its documentation but am still at a loss. Have you any advice?
If you really need the test's name: it's not well documented, but NUnit exposes a feature that lets you get access to the current test's information:
namespace NUnitOutput.Example
{
    using System;
    using NUnit.Framework;

    [TestFixture]
    public class Demo
    {
        [Test]
        public void WhatsMyName()
        {
            Console.WriteLine(TestContext.CurrentContext.Test.FullName);
            Console.WriteLine(TestContext.CurrentContext.Test.Name);
        }
    }
}
Provides:
NUnitOutput.Example.Demo.WhatsMyName
WhatsMyName
Note that this feature isn't guaranteed to be implemented by custom test runners, like ReSharper. I have tested this in NUnit 2.5.9 (nunit.exe and nunit-console.exe).
However, re-reading your question, I think what you should check out is the TestCaseSource or TestCase attribute, which can be used to parameterize your tests.
If I'm understanding your problem correctly, you want to get the name of the currently-running test so that you can use it as a key to look up a set of data with which to populate the data sources used by the code under test. Is that right?
If that's the case then I don't think you need to look for special functionality in your unit testing framework. Why not use the Reflection API to fetch the name of the currently-executing method? System.Reflection.MethodBase.GetCurrentMethod() will get you a MethodBase object representing the method, and that has a Name property.
However, I'd suggest that using the method name as a key for looking up the appropriate test data is a bad idea. You'd be coupling the name of the method to your data set, and that seems like a recipe for fragile code to me. You need to remain free to refactor your code and rename methods without worrying about whether that will break the database lookup behind the scenes.
As an alternative, why not consider creating a custom Attribute that you can use to mark those test methods that need a database lookup, and using a property on that attribute to hold the key?
For things like this you should rely on a fixture to initialize the state you want before you run the test.
The simplest way, which works in any testing framework, is to create a fixture that loads data given a data identifier (a string). Then in each test case you just provide the identifier string for the data you want in that test.
Aside from this, it's not recommended to have unit tests access files and other external resources because it means slower unit tests and higher probability of failure (as you're relying on something outside the in-memory code). This of course depends on the amount of data you have and the type of testing you're doing, but I generally have the data compiled-in for unit tests.