I sometimes see that tests are tagged as :pending.
ExUnit.start
ExUnit.configure(exclude: :pending)

defmodule SublistTest do
  use ExUnit.Case, async: true

  test "empty equals empty" do
    assert Sublist.compare([], []) == :equal
  end

  @tag :pending
  test "empty is a sublist of anything" do
    assert Sublist.compare([], [nil]) == :sublist
  end
end
Obviously these are excluded from execution when you run the tests from the shell:
elixir sublist_test.exs
Is there a way to include the :pending tests when running them from the command line?
And a second question: why do people tag tests as :pending?
You can do this with the mix test task inside a Mix project. Mix projects are super simple to set up:
$ mix new sublist
You can specify your default exclusions in test/test_helper.exs:
ExUnit.start()
ExUnit.configure(exclude: :pending)
then you can write your test in test/sublist_test.exs. To run the tests, do
$ mix test
and to include pending tests as well do
$ mix test --include pending
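If you ever want to run only the pending tests instead (say, to work through the backlog), mix test also has an --only flag, if I remember correctly:
$ mix test --only pending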
Now for your second question: people usually mark tests as pending because they are not implemented yet, but they don't want to forget about them. For example you might be on a tight deadline but want to make sure that the tests will eventually be completed. Or maybe the test is not working yet because you need to implement other things first.
If the tests were not excluded by default they would convey the wrong message: that the affected tests are red. But they are rather to do items than actual tests, so they should not fail by default.
Is it possible to create a GoogleTest filter without a wildcard in it?
I have a test suite TestSuiteA
TEST_P(TestSuiteA, Blabla) {
// ...
}
INSTANTIATE_TEST_CASE_P(Apple, TestSuiteA, testing::Combine(...));
INSTANTIATE_TEST_CASE_P(Ananas, TestSuiteA, testing::Combine(...));
And a second test suite TestSuiteB
TEST_P(TestSuiteB, Kokoko) {
// ...
}
INSTANTIATE_TEST_CASE_P(Apple, TestSuiteB, testing::Combine(...));
INSTANTIATE_TEST_CASE_P(Ananas, TestSuiteB, testing::Combine(...));
I know precisely which tests I want to launch, so I don't want to use a wildcard.
My filter:
Apple/TestSuiteA:Apple/TestSuiteB
But it doesn't work: 0 tests are run.
But with this
Apple/TestSuiteA.*:Apple/TestSuiteB.*
It works
What does this .* mean, and why doesn't it work without it?
Thanks.
Each test needs to have a unique name, and you haven't provided an exact, correct name, which is why no tests were run. The filter patterns are matched against the full test name, which has the form TestSuiteName.TestName (for your parameterized tests that looks like Apple/TestSuiteA.Blabla/0), so Apple/TestSuiteA on its own matches no full name. In the pattern, the . is just the literal separator and * is a wildcard, so Apple/TestSuiteA.* means "any test in the Apple instantiation of TestSuiteA". Make sure you know exactly how the tests are named - you can use --gtest_list_tests to list all the test cases defined in a given test executable. Then use the exact names with --gtest_filter, separated with :. To read more about how to run multiple tests, see the gtest advanced documentation.
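For example (my_tests is a placeholder for your test binary, and the generated names below are what I'd expect for the code in the question, so double-check them against the listing):
$ ./my_tests --gtest_list_tests
Apple/TestSuiteA.
  Blabla/0  # GetParam() = ...
Ananas/TestSuiteA.
  Blabla/0  # GetParam() = ...
...
$ ./my_tests --gtest_filter=Apple/TestSuiteA.Blabla/0:Apple/TestSuiteB.Kokoko/0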
pytest allows the creation of fixtures that are automatically applied to every test in a test suite (via the autouse keyword argument). This is useful for implementing setup and teardown actions that affect every test case. More details can be found in the pytest documentation.
In theory, the same infrastructure would also be very useful for verifying post-conditions that are expected to exist after each test runs. For example, maybe a log file is created every time a test runs, and I want to make sure it exists when the test ends.
Don't get hung up on the details, but I hope you get the basic idea. The point is that it would be tedious and repetitive to add this code to each test function, especially when autouse fixtures already provide infrastructure for applying this action to every test. Furthermore, fixtures can be packaged into plugins, so my check could be used by other packages.
The problem is that it doesn't seem to be possible to cause a test failure from a fixture. Consider the following example:
import pytest

@pytest.fixture(autouse=True)
def check_log_file():
    # Yielding here runs the test itself
    yield
    # Now check whether the log file exists (as expected)
    if not log_file_exists():
        pytest.fail("Log file could not be found")
In the case where the log file does not exist, I don't get a test failure. Instead, I get a pytest error. If there are 10 tests in my test suite, and all of them pass, but 5 of them are missing a log file, I will get 10 passes and 5 errors. My goal is to get 5 passes and 5 failures.
So the first question is: is this possible? Am I just missing something? This answer suggests to me that it is probably not possible. If that's the case, the second question is: is there another way? If the answer to that question is also "no": why not? Is it a fundamental limitation of pytest infrastructure? If not, then are there any plans to support this kind of functionality?
In pytest, a yield-ing fixture has the first half of its definition executed during setup and the latter half executed during teardown. Further, setup and teardown aren't considered part of any individual test and thus don't contribute to its failure. This is why you see your exception reported as an additional error rather than a test failure.
On a philosophical note, as (cleverly) convenient as your attempted approach might be, I would argue that it violates the spirit of test setup and teardown and thus even if you could do it, you shouldn't. The setup and teardown stages exist to support the execution of the test—not to supplement its assertions of system behavior. If the behavior is important enough to assert, the assertions are important enough to reside in the body of one or more dedicated tests.
If you're simply trying to minimize the duplication of code, I'd recommend encapsulating the assertions in a helper method, e.g., assert_log_file_cleaned_up(), which can be called from the body of the appropriate tests. This will allow the test bodies to retain their descriptive power as specifications of system behavior.
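A minimal sketch of that idea, reusing log_file_exists() from the question (the other names are placeholders):

import pytest

def assert_log_file_exists():
    # Shared post-condition check, called explicitly by the tests that care about it.
    if not log_file_exists():
        pytest.fail("Log file could not be found")

def test_creates_log_file():
    run_the_code_under_test()   # placeholder for the real test body
    assert_log_file_exists()    # a failure here is attributed to this test, not to teardown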
AFAIK it isn't possible to tell pytest to treat errors in a particular fixture as test failures.
I also have a case where I would like to use a fixture to minimize test code duplication, but in your case pytest-dependency may be the way to go.
Moreover, test dependencies aren't bad for non-unit tests. Be careful with autouse, though, because it makes tests harder to read and debug; explicit fixtures in the test function header at least give you some direction for finding the executed code.
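In case it helps, here is a small pytest-dependency sketch (untested; check the plugin's documentation for the details):

import pytest

@pytest.mark.dependency()
def test_login():
    ...

@pytest.mark.dependency(depends=["test_login"])
def test_search():
    # automatically skipped if test_login failed
    ...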
I prefer using context managers for this purpose:
from contextlib import contextmanager

@contextmanager
def directory_that_must_be_clean_after_use():
    directory = set()
    yield directory
    assert not directory

def test_foo():
    with directory_that_must_be_clean_after_use() as directory:
        directory.add("file")
If you absolutely can't afford to add this one line to every test, it's easy enough to write this as a plugin.
Put this in your conftest.py:
import pytest

directory = set()

# register the marker so that pytest doesn't warn you about unknown markers
def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "directory_must_be_clean_after_test: the name says it all")

# this is going to be run on every test
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    directory.clear()
    yield
    if item.get_closest_marker("directory_must_be_clean_after_test"):
        assert not directory
And add the according marker to your tests:
# test.py
import pytest
from conftest import directory

def test_foo():
    directory.add("foo file")

@pytest.mark.directory_must_be_clean_after_test
def test_bar():
    directory.add("bar file")
Running this will give you:
fail.py::test_foo PASSED
fail.py::test_bar FAILED
...
> assert not directory
E AssertionError: assert not {'bar file'}
conftest.py:13: AssertionError
You don't have to use markers, of course, but these allow controlling the scope of the plugin. You can have the markers per-class or per-module as well.
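For example, to mark everything in a module or class at once (a sketch; pytestmark is pytest's standard module-level marker list, and the class name is made up):

import pytest

# mark every test in this module
pytestmark = pytest.mark.directory_must_be_clean_after_test

# or mark every test in a class
@pytest.mark.directory_must_be_clean_after_test
class TestDirectoryUsers:
    def test_baz(self):
        ...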
I have a large suite of tests that are currently running in series. I would like to make the suite parallelizable as much as possible. One big problem I have is that some tests require certain global application state to be set one way, and some require it to be set another way. Is there a way I can make the NUnit test runner group my tests based on which global state they require, and then run the tests within each group in parallel with each other, and modify the global state between groups?
For example, let's say there is a global setting Foo and a custom attribute [RequiresFoo(X)] that can be used to annotate which Foo value a test requires. At runtime I want NUnit to group all tests by their argument to RequiresFoo, counting unmarked tests as having some default Foo value.
Then for each group, I want it to
Set Foo = N where N is the Foo value for that group.
Run all tests in that group in parallel.
In an ideal world I would have a mock system for this global state, but that would take a lot of time that I don't have right now. Can I get NUnit to do this or something like it?
Note, I need to be able to execute any method between groups, not just set a variable. The global context I'm actually dealing with can involve starting or stopping microservices, updating configuration files, etc. I can serialize any of these requirements to a string to pass to a custom attribute, but at runtime I need to be able to run arbitrary code to parse the requirements and reconfigure the environment.
Here is a pseudo-code example.
By default tests execute in series like this:
foreach (var t in allTests)
{
Run(t);
}
NUnit's basic parallel behavior is like this:
Parallel.ForEach(allTests, t => Run(t));
I want something like this:
var grouped = allTests
.GroupBy(t => GetRequirementsFromCustomAttributes(t));
foreach (var g in grouped)
{
SetGlobalState(g.Key);
Parallel.ForEach(g, t => Run(t));
}
As you state the problem, it's not yet possible in NUnit. There is a feature planned but not yet implemented that would allow arbitrary grouping of tests that may not run together.
Your workaround is to make each "group" a test, since NUnit only allows specification of parallelization on tests. Note that by test, we mean either a test case or a group of tests, i.e. fixture or namespace suite.
Putting [Parallelizable] on a test anywhere in the hierarchy causes that test to run in parallel with other tests at the same level. Putting [NonParallelizable] on it causes that same test to run in isolation.
Let's say you have five fixtures that require a value of Foo. You would make each of those fixtures non-parallelizable. In that way, none of them could run at the same time and interfere with the others.
If you want to allow those fixtures to run in parallel with other non-Foo fixtures, simply put them all in the same namespace, like Tests.FooRequired.
Create a SetUpFixture in that namespace - possibly a dummy without any actual setup action. Put the [Parallelizable] attribute on it.
The group of all foo tests would then run in parallel with other tests while the individual fixtures would not run in parallel with one another.
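A rough sketch of that layout (the NUnit attributes are standard; GlobalState, FooGroupSetup and FirstFooFixture are made-up names):

using NUnit.Framework;

namespace Tests.FooRequired
{
    // Runs once before any fixture in this namespace; marked [Parallelizable]
    // so the whole Foo group can run alongside other namespaces.
    [SetUpFixture, Parallelizable]
    public class FooGroupSetup
    {
        [OneTimeSetUp]
        public void SetFoo()
        {
            GlobalState.Foo = 42;   // or start services, rewrite config files, etc.
        }
    }

    // Each fixture that needs Foo is non-parallelizable, so these fixtures
    // never run at the same time as one another.
    [TestFixture, NonParallelizable]
    public class FirstFooFixture
    {
        [Test]
        public void SomethingThatNeedsFoo() { /* ... */ }
    }
}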
I am trying to write tests for Sphinx search. The user passes some params to the API, and based on those params Sphinx performs the search.
I have the following 3 tests.
In test_helper.rb I have everything set up:
require 'factory_girl'
require 'thinking_sphinx/test'

ThinkingSphinx::Test.init
.........

class ActiveSupport::TestCase
  self.use_transactional_fixtures = false
  self.use_instantiated_fixtures = false
  fixtures :all
And my tests:
test "should 1st test" do
ThinkingSphinx::Test.start
# uthenticatoin and creating records in databese with Factory Girl
ThinkingSphinx::Test.index
get "some/path", {params}, #headers
assert_response :success
end
test "should 2nd test" do
# uthenticatoin and creating records in databese with Factory Girl
ThinkingSphinx::Test.index
get "some/path", {params}, #headers
assert_response :success
# other assertions
end
test "should 3rd test" do
# uthenticatoin and creating records in databese with Factory Girl
ThinkingSphinx::Test.index
get "some/path", {params}, #headers
assert_response :success
# other assertions
ThinkingSphinx::Test.stop
end
I don't know why, but my tests don't run in the order they are written; they run 2nd, 3rd, 1st.
How can I make the tests run in the order they are written? I am using basic Rails Test::Unit.
Order matters for me because of the specifics of the tests.
Thanks.
Your tests should never be written in such a way that order matters. I see why you want the order though, and there are ways to deal with that. Try this:
setup do
  ThinkingSphinx::Test.start
end

# Your tests

teardown do
  ThinkingSphinx::Test.stop
end
This makes it so that ThinkingSphinx::Test is started before each test and stopped after each test. This is the ideal way to set it up, so now it doesn't matter what order your tests run in.
However, if ThinkingSphinx::Test.start is a long process, you may not want it running for each test. I don't know if Test::Unit gives you the ability to run setup before an entire suite or set of tests, but in RSpec you can (with before(:all) or before(:suite)), and that would serve you better.
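If you need something similar in Test::Unit, one workaround (a rough sketch, untested) is to start Sphinx once from test_helper.rb and register the stop with Ruby's at_exit, so it happens once per run regardless of the order the tests execute in:

# test_helper.rb
ThinkingSphinx::Test.init
ThinkingSphinx::Test.start
at_exit { ThinkingSphinx::Test.stop }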
Suppose I have a class Car with the following methods:
LoadGasoline(IFuel gas)
InsertKey(IKey key)
StartEngine()
IDrivingSession Go()
The purpose of Car is to configure and return an IDrivingSession, which the rest of the application uses to drive the car. How do I unit-test my Car?
It looks like a sequence of operations is required before I can call the Go() method. But I want to test each method separately, as they all have some important logic. I don't want to have a bunch of unit tests like
Test1: LoadGasoline, Assert
Test2: LoadGasoline, InsertKey, Assert
Test3: LoadGasoline, InsertKey, StartEngine, Assert
Test4: LoadGasoline, InsertKey, StartEngine, Go, Assert
Isn't there a better way to unit-test sequential logic or is this a problem with my Car design?
--- EDIT ----
Thanks for all the answers. As many noticed, I should also have tests for invalid scenarios and I have those too, but this question is focused on how to test the valid sequence.
I think each method should be tested separately and independently.
IMHO, you should prepare the environment for each case, so only the LoadGasoline test will break if you change the LoadGasoline method, and you won't need to see all the tests break because of a single bug.
I don't know what the state of your Car looks like, but before InsertKey you should prepare it with a method like car.SetTotalGasoline(20), or set whatever variable LoadGasoline sets, rather than depending on the complex logic of the LoadGasoline method.
You will later need a test (in this case, not a unit test) to exercise the whole sequence.
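For example, an InsertKey test could look roughly like this (NUnit-style syntax; SetTotalGasoline is the hypothetical state setter mentioned above, and FakeKey stands in for whatever IKey test double you use):

[Test]
public void InsertKey_WithGasolineAlreadySet_Succeeds()
{
    var car = new Car();
    car.SetTotalGasoline(20);      // set the state directly instead of running LoadGasoline
    car.InsertKey(new FakeKey());
    // Assert on whatever InsertKey is supposed to change.
}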
Some unit testing frameworks let you specify set-up code which runs before the actual test starts.
This allows you to get the target object into the proper state before running your test. That way your test can pass or fail based on the specific code you're testing rather than on the code needed before you can run a test.
As a result, your test sequence will wind up something like this:
Test1:
LoadGasoline, Assert
Test2 Setup:
LoadGasoline
Test2:
InsertKey, Assert
Test3 Setup:
LoadGasoline, InsertKey
Test3:
StartEngine, Assert
Test4 Setup:
LoadGasoline, InsertKey, StartEngine
Test4:
Go, Assert
Realistically speaking, since the tests are all run in sequence, there's no chance of a test's setup failing if the previous test passes.
With that said, you should also test failure cases that aren't expected to work but that's a different issue.
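For instance, Test3 might look roughly like this (NUnit-style syntax; FakeFuel and FakeKey stand in for your IFuel/IKey test doubles, and the final assertion is a placeholder since the question doesn't say what StartEngine changes):

using NUnit.Framework;

[TestFixture]
public class StartEngineTests
{
    private Car _car;

    // Test3 Setup: bring the car into the state StartEngine needs.
    [SetUp]
    public void LoadGasolineAndInsertKey()
    {
        _car = new Car();
        _car.LoadGasoline(new FakeFuel());
        _car.InsertKey(new FakeKey());
    }

    [Test]
    public void StartEngine_WithFuelAndKey_Succeeds()
    {
        _car.StartEngine();
        // Assert on whatever observable state StartEngine is supposed to change.
    }
}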
Why don't you want all those tests?
Go has very different behavior if you call it before or after, say, InsertKey, right? So you ought to be testing both behaviors, in my opinion.
It's a fair reluctance, but sometimes that's what you need to do. If you can't fake out the system under test so it thinks it's in a later state, then you need to go through the same steps to get it into that state. Without knowing more about what you're testing, it's not clear how you could fake out the different states.
One way you can make this tolerable is to use an extract-method refactoring on the tests for one state so that the same code can be used to prepare the next test.
I would probably have:
Test 1: LoadGasoline, Assert, InsertKey, Assert, StartEngine, Assert, Go, Assert
Test 2: LoadGasoline, Go, Assert
Test 3: Go, Assert
Test 4: StartEngine, Go, Assert
Depending on the actual object, I would probably not try to do all permutations, but I would have a single test that covers the success path, and then tests that hit my fringe cases.
Edit:
After some thought I might have tests like:
Start a car that has no gas
Start a car with gas and the wrong key
Start a car with gas and the right key (Test 1 above)
Push the pedal before starting the car
Technically you should use the following tests at least:
testLoadGasoline
testInsertKeyGasolineNotLoaded
testStartEngineKeyNotInserted
testGoEngineNotStarted
testGo
If you can directly view intermediate steps you can add
testInsertKeyGasolineLoaded
testStartEngineKeyInserted
Note that if you can directly set the state (which is language and design dependent), then
testInsertKeyGasolineLoaded might not actually call LoadGasoline.