I'm attempting to set up Boost's unit test framework for myself, but because I have to use C++98, I'm stuck with Boost 1.45. As a result, I can't use datasets the way I'd like (test cases with an arity of 2 and a dataset of (input_val, expected_val) pairs). It looks like I can get an approximation going with a global fixture (the fixture has to be global if I don't want it reset for every test case; that issue is documented in another post), but I dislike the idea of throwing all of that into the global namespace. Does anybody know a better, possibly more elegant solution?
Edit: the comments here provided the info I needed. If anybody else is looking for a good unit-testing framework for C++98, Catch seems great so far!
What strategies have Perl people used when mocking Moose objects that they will inject into other Moose objects as type-constrained attributes?
Test::MockObject::Extends doesn't seem to play well with Moose. I need the object to be blessed into a specific package, though, so a vanilla Test::MockObject won't work. I'm sure other folks have had similar difficulty. How did you resolve it?
Extra points for solutions that are already on CPAN.
Well, I'm not an expert on such things, but the first thing I'd look at is Shawn Moore's (Sartak) Test-MockOO.
If that doesn't work for you, I'd look at using the power of the metaobject protocol and start building mock objects manually. Look at Class::MOP::Class and Moose::Meta::Class for how to override specific methods and/or create entire classes programmatically at runtime.
If that still doesn't work, I'd swing past IRC and ask. The Moose heavy hitters hang out there, and I'm sure one of them has run into this situation.
A bit of a self-plug, but I wrote http://search.cpan.org/~cycles/Test-Magpie-0.05/lib/Test/Magpie.pm; maybe you'll find it useful. A mock created with it acts as any class and does every role possible; it doesn't mock a specific object or class at all. Sadly, CPAN's search is a bit rubbish, so searching for "test mock" doesn't show it in the results.
I should also mention that the documentation doesn't contain a huge amount of motivation or example code, so you may wish to check some of the tests:
http://cpansearch.perl.org/src/CYCLES/Test-Magpie-0.05/t/mockito_tutorial.t
http://cpansearch.perl.org/src/CYCLES/Test-Magpie-0.05/t/basic.t
I'm trying to follow the suggestions in this blog post for testing some saga behaviour.
The problem starts as soon as I try to use FakeBus: it should be in the Rebus.Testing namespace, but it seems to have disappeared.
Where is my error? A lot has changed in Rebus lately; is this one of those changes?
Yeah, as you've correctly discovered, there's no FakeBus in Rebus 2 yet.
And as you might have found out as well, there is no SagaFixture either.
The reason is that I have found that using Rebus with an in-memory transport for testing has been sufficient for my needs so far, although I have had to do my saga testing at a fairly high level.
At this point though, so many people have asked for SagaFixture and FakeBus that I have now decided to put them back.
Expect them to be in Rebus 0.99.39 one of the following days (possibly tomorrow).
I'm new to XNA game development, and I have just started writing a small 3D game. I have written several unit tests for my code, but I have run into a problem: when I want to unit test modules that need access to a Model, I haven't found a way to create a ContentManager with which to load Models. In a proper Game, the ContentManager is provided by the framework; in my unit tests I would have to create it myself, but I have no idea how.
An alternative to loading Models through a ContentManager would be to create Model objects programmatically, but that seems rather tedious. Another alternative would be to mock the Models using, for example, Moq, but that seems equally tedious.
Has anyone else encountered this problem and solved it?
Unit testing an XNA project is a common issue and one that is often discussed. Usually the problem comes down to needing access to an instance of Game, GraphicsDevice, or (in your case) ContentManager, and there being no easy way of obtaining one.
You can see related discussions here, here, and here.
I believe the generally accepted practice is to re-evaluate what you are trying to test to see if you actually need these references, or if you can find a way around them.
Failing that, could your test case be sufficiently covered by playtesting?
If neither of the above applies, mocking the objects can prove rather difficult due to the requirements placed on them by their parent classes/interfaces, but I have heard of people doing it. I have also heard it is possible to create a GraphicsDevice using an invisible form, but I have not done this myself.
For my own tests, I've opted not to test any graphical elements (drawing, resource loading, etc.). That does leave a bit of a hole in my code coverage, but after spending a few days searching for ways to solve this exact problem and finding no answers, I decided that testing my library functions (which do the majority of the work in my projects anyway) was good enough.
The code in this answer explains how to create a stand-alone instance of ContentManager.
I'm starting to work through the Project Euler problems, and I'd like to approach them in a TDD style, but I'm having trouble finding the numeric answer to each problem stated without the accompanying code. Is there any resource with that data, so that I can write test cases that tell me whether I've solved a problem correctly?
My motivation is that I feel the algorithm is the answer, not the number. If I look at someone else's code sample, it ruins the challenge of figuring out how to solve the problem myself.
Edit: I'm looking specifically for the number of the answer, with no context or algorithm attached, so that I can do something like the following. I know it's more verbose, but I'd like a pass/fail result telling me whether my algorithm is correct, rather than having to look at someone else's code example to find out.
import sys
import unittest

class ProblemOneTest(unittest.TestCase):
    def test_me(self):
        # solve_problem_one() would be my (not yet written) solution.
        self.assertEquals(solve_problem_one(), 233168)

if __name__ == '__main__':
    print "Problem 1 possible answer: %d" % solve_problem_one()
    sys.exit(unittest.main())
TDD and Project Euler assignments don't necessarily go well together. First and foremost, TDD won't help you solve any Project Euler (PE) problem. This reminds me of the well-known attempt by a guy to "solve Sudoku" using TDD.
TDD is not a design technique. It can be very useful when applicable, but don't think of it as a silver bullet.
A PE problem usually involves some heavy computation that ends in a single number, which is the answer. To apply TDD mindfully, I recommend using it for the mathematical utilities you develop as part of your efforts to solve PE problems. For example, my utils module for PE consists of functions for computing primes, splitting numbers into digits, checking for palindromes, and so on. This module has a set of tests, because these functions are general enough to be worth testing. The PE solutions themselves don't have tests; the only real test they need is to eventually produce the correct answer.
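To make that concrete, here is a minimal sketch of what such a utils module and its tests might look like. The names and layout here are my own illustration, not the original author's actual module:

# pe_utils.py -- small, general-purpose helpers that are worth testing
def is_palindrome(n):
    """Return True if the decimal digits of n read the same both ways."""
    s = str(n)
    return s == s[::-1]

def digits(n):
    """Split a non-negative integer into its decimal digits."""
    return [int(c) for c in str(n)]

# test_pe_utils.py -- the helpers get tests; the PE solutions themselves don't
import unittest

class TestPeUtils(unittest.TestCase):
    def test_is_palindrome(self):
        self.assertTrue(is_palindrome(9009))
        self.assertFalse(is_palindrome(10))

    def test_digits(self):
        self.assertEqual(digits(1406), [1, 4, 0, 6])

if __name__ == '__main__':
    unittest.main()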
The problem page on the Project Euler website has an input box for checking your answer. That's all I really need.
Yes, you can set up your unit tests against the test data they give.
It appears you are using Python to solve the problems (as am I). To validate the individual components, I write simple assert statements against the example data. That works well and has less time overhead. Besides, you don't need to run an entire test suite when all you need to know is whether your new changes for problem 30 are correct.
Using Assertions Effectively
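For instance, Problem 1's own statement says the sum of the multiples of 3 or 5 below 10 is 23, so a single assert against that example is enough to sanity-check a solution before computing the real answer. This sketch reuses the solve_problem_one name from the question above, parameterized by the limit:

def solve_problem_one(limit):
    # One possible solution: sum all multiples of 3 or 5 below limit.
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)

# Assert against the example data given in the problem statement itself.
assert solve_problem_one(10) == 23

print "Problem 1 answer: %d" % solve_problem_one(1000)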
The unit test IS the answer.
The problems are usually so simple (not in terms of difficulty, but in terms of code layout) that breaking them up into various methods/classes is usually silly.
I know I'm three years late to the party, but I thought I'd share how I approach Project Euler via TDD.
I'm working in Python, if that matters to you.
What I do is this:
Every problem gets (at a minimum) its own function that serves as an entry/exit point, no matter how trivial or silly that may feel. A problem may also get helper functions if it requires some functionality you think you'll need again in the future.
Most Project Euler questions include a smaller demo/test problem in the question itself. This test problem illustrates what you must solve, but on a smaller scale.
Plan to set up your entry/exit function with a parameter that allows it to solve both the toy version of the problem and the harder full-scale version. For instance, on problem 12 my (ridiculously named) entry point is get_triangle_num_with_n_or_more_divisors(n).
At this point I haven't implemented the function, just named it. Now I write two tests for the problem: test_example and test_problem. I'll decorate test_problem with @unittest.skip('Unimplemented') for now, since we don't know the answer yet. Your test file might look something like mine:
import unittest

from problems.p0012 import get_triangle_num_with_n_or_more_divisors

class TestHighlyDivisibleTriangleNumber(unittest.TestCase):
    def test_example(self):
        self.assertEquals(get_triangle_num_with_n_or_more_divisors(1), 1)
        self.assertEquals(get_triangle_num_with_n_or_more_divisors(2), 3)
        self.assertEquals(get_triangle_num_with_n_or_more_divisors(6), 28)

    @unittest.skip('Unimplemented')
    def test_problem(self):
        self.assertEquals(get_triangle_num_with_n_or_more_divisors(500),
                          'TODO: Replace this with answer')
Now you are doing Project Euler, TDD style. You are using the example cases given in the question to test your implementation code. Really, the only trick is to write your implementation flexibly enough that it can solve both the practice version and the real version.
I then sit down and write get_triangle_num_with_n_or_more_divisors. Once test_example passes, I try to solve the real problem; if it works, I update my test_problem case with the real answer, and bam, you've got a full-blown regression test to boot.
Despite the fact that these problems are more of a challenge without an answer to steer toward, a quick Google search yielded:
http://code.google.com/p/projecteuler-solutions/wiki/ProjectEulerSolutions
Thought I'd share my approach:
HackerRank, which has a Project Euler section, follows the TDD paradigm: it scores your algorithm using unknown test cases, and provides one sample test case to get you started. I develop offline and write some additional test cases to validate my solution, which gives quicker and more precise feedback.
Where would one get those cases? You can write them by hand, or generate them from your own brute-force code run locally (see the sketch after the example below). The beauty of this is that you must account for edge cases yourself, which is more typical of a real-life scenario.
Example of tests in JavaScript:
var assert = require('assert');  // Node's built-in assertion module

// `unit` is the solution function under test (see note below).
var cases = [
  {input: '1\n15', output: '45'},
  ...
];

describe('Multiples of 3 and 5', function () {
  cases.forEach((v, i) => {
    it('test case #' + i, function () {
      assert.equal(unit(v.input), v.output);
    });
  });
});
Although HackerRank uses stdin and stdout, I still try to isolate the main code into a function and use a functional style.
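Here is a rough sketch of the brute-force idea mentioned above, in Python rather than JavaScript and with invented function names: a slow but obviously correct reference implementation generates the expected outputs, and the optimized solution is checked against it on random inputs.

import random

def sum_multiples_brute(limit):
    # Slow but obviously correct reference implementation.
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)

def sum_divisible_by(k, limit):
    # Sum of the multiples of k below limit (arithmetic series formula).
    p = (limit - 1) // k
    return k * p * (p + 1) // 2

def sum_multiples_fast(limit):
    # The optimized solution under test, via inclusion-exclusion.
    return (sum_divisible_by(3, limit) + sum_divisible_by(5, limit)
            - sum_divisible_by(15, limit))

# Generate test cases from the brute force and check the fast version.
for _ in range(100):
    limit = random.randint(1, 10000)
    assert sum_multiples_fast(limit) == sum_multiples_brute(limit), limit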
In the JUnit FAQ you can read that you shouldn't test methods that are too simple to break. While all the examples seem logical (getters and setters, delegation, etc.), I'm not sure I fully grasp the "can't break on its own" concept. When would you say that a method "can't break on its own"? Would anyone care to elaborate?
I think "can't break on its own" means that the method only uses elements of its own class and does not depend on the behavior of any other objects/classes, or that it delegates all of its functionality to some other method or class (which presumably has its own tests).
The basic idea is that if you can see everything the method does, without needing to refer to other methods or classes, and you are pretty sure it is correct, then a test is probably not necessary.
There is not necessarily a clear line here. "Too simple to break" is in the eye of the beholder.
Try thinking of it this way: you're not really testing methods. You're describing some behaviour and giving examples of how to use it, so that other people (including your later self) can come along and change that behaviour safely later. The examples happen to be executable.
If you think that people can change your code safely, you don't need to worry.
No matter how simple a method is, it can still be wrong. For example you might have two similarly named variables and access the wrong one. However, these errors will likely be quickly found and once these methods are written correctly, they are going to stay correct and so it is not worthwhile permanently keeping around a test for this. Rather than "too simple to break", I would recommend considering whether it is too simple to be worth keeping a permanent test.
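As a contrived sketch of that kind of slip (all names invented for illustration): even a one-line accessor can return the wrong one of two similarly named fields, and a trivial check catches it immediately.

class Order(object):
    def __init__(self, net_total, gross_total):
        self.net_total = net_total
        self.gross_total = gross_total

    def get_gross_total(self):
        # An easy slip would be returning the similarly named
        # self.net_total here, even though the method is "too simple".
        return self.gross_total

# Finds the slip at once; after the fix, the accessor is unlikely to
# regress, so a permanent test for it may not pay its way.
assert Order(100, 119).get_gross_total() == 119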
Put it this way: you're building a wooden table.
You test the things that may fail: placing a jar on the table, sitting on it, pushing it from one side of the room to the other, and so on. You test the table in the ways you know it is vulnerable, or at least in the ways you know you'll use it.
You don't test the nails, though, or each of its legs, because they are "too simple to break on their own".
The same goes for unit testing: you don't test getters/setters, because the only way they can fail is if the runtime environment fails. You don't test methods that merely forward a message to another method, because they are too simple to break on their own; you're better off testing the referenced method.
I hope this helps.
If you are using TDD, that advice is wrong. In TDD, a function exists because a test failed; it doesn't matter whether the function is simple or not.
But if you are adding tests afterwards to existing code, I can sort of understand the argument that you shouldn't need to test code that cannot break. Still, I think that is just an excuse for not writing tests. Also ask yourself: if that piece of code is not worth testing, is the code needed at all?
I like the risk-based approach of GAMP 5, which basically means (in this context) first assessing the various possible risks of the software and defining tests only for the higher-risk parts.
Although this applies to GxP environments, it can be adapted along these lines: how likely is a certain class to have erroneous methods, and how big would the impact of an error be? E.g., if a method decides whether to give a user access to a resource, you must of course test it extensively.
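As a hedged illustration (the access rule and all names here are invented, not taken from GAMP 5): a high-impact decision like access control deserves a test for every branch, while a trivial formatting helper in the same codebase might not.

import unittest

def can_access(user_role, resource_owner, user_name):
    # Hypothetical access rule: admins may access anything;
    # other users may only access resources they own.
    return user_role == 'admin' or resource_owner == user_name

class TestCanAccess(unittest.TestCase):
    def test_admin_can_access_any_resource(self):
        self.assertTrue(can_access('admin', 'alice', 'bob'))

    def test_owner_can_access_own_resource(self):
        self.assertTrue(can_access('user', 'alice', 'alice'))

    def test_other_user_is_denied(self):
        self.assertFalse(can_access('user', 'alice', 'bob'))

if __name__ == '__main__':
    unittest.main()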
That means that when determining where to draw the "too simple to break" line, it helps to take into consideration the possible consequences of a potential flaw.