Implementing xunit in a new programming language - unit-testing

Some of us still "live" in a programming environment where unit testing has not yet been embraced. To get started, the obvious first step would be to try to implement a decent framework for unit testing, and I guess xUnit is the "standard".
So what is a good starting point for implementing xUnit in a new programming language?
BTW, since people are asking: My target environment is Visual DataFlex.

Which language is it for? There are quite a few in place already.

If this is stopping you from getting started with writing unit tests, you could start out without a testing framework.
Example in C-style language:
void Main()
{
    var algorithmToTest = MyUniversalQuestionSolver();
    var question = Answer to { Life, Universe && Everything };
    var actual = algorithmToTest(question);
    var expected = 42;
    if (actual != expected) Error();
    // ... add a bunch of tests
}
Example in Cobol-style language:
MAIN.
    COMPUTE EXPECTED_ANSWER = 42
    SOLVE ANSWER_TO_EVERYTHING GIVING ACTUAL_ANSWER
    SUBTRACT ACTUAL_ANSWER FROM EXPECTED_ANSWER GIVING DIFFERENCE
    IF DIFFERENCE NOT.EQ 0 THEN
        DISPLAY "ERROR!"
    END-IF
    * ... add a bunch of tests
    STOP RUN
Run Main after you finish a change (and possibly a compile) of your code. Run Main on the server whenever someone submits code to your repository.
When you get hooked, look for a framework, or see if you could factor out some of the bits from Main into your own framework.
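As a rough illustration of that last step, here is a minimal Java sketch (all names are made up, and any language with an exit code will do) of the kind of tiny helper a Main full of checks could grow into; the non-zero exit code is what lets a build server fail the run:

public class TinyTest {
    private static int failures = 0;

    // Report a single check and keep going, so one failure doesn't hide the rest.
    static void check(String testName, Object expected, Object actual) {
        if (expected == null ? actual != null : !expected.equals(actual)) {
            failures++;
            System.out.println("FAIL " + testName + ": expected <" + expected + "> but was <" + actual + ">");
        } else {
            System.out.println("PASS " + testName);
        }
    }

    // Stand-in for the code under test.
    static int solveAnswerToEverything() { return 42; }

    public static void main(String[] args) {
        check("answerToEverything", 42, solveAnswerToEverything());
        // ... add a bunch of tests
        System.exit(failures == 0 ? 0 : 1);
    }
}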

I'd suggest that a good starting point would be to use xUnit in a couple of other languages to get a feel for how this style of unit test framework works. Then you'll need to go in depth into the behaviour and start working out how to recreate that behaviour in a way that fits with your new language.

I created a decent unit test framework in VFP by basing it on the code in Test Driven Development: A Practical Guide, by David Astels. You'll get a long way by reading through the examples, understanding the techniques and translating the Java code into your language.

I found Pragmatic Unit Testing in C# with NUnit very helpful!

Related

Unit Testing Libgdx classes that call a ShaderProgram (such as a Stage) via a HeadlessApplication

I am trying to unit test the core package of my libgdx application.
What is the best way to mock the ShaderProgram such that the root class may be tested?
Given the following init for a Libgdx test runner,
init {
    val conf = HeadlessApplicationConfiguration()
    HeadlessApplication(this, conf)
    Gdx.gl = mock(GL20::class.java)
    Gdx.gl20 = mock(GL20::class.java)
    Gdx.gl30 = mock(GL30::class.java)
    Gdx.graphics = mock(Graphics::class.java)
    `when`(Gdx.graphics.height).thenReturn(dimensions)
    `when`(Gdx.graphics.width).thenReturn(dimensions)
}
and the function under test (which is in a class that implements ApplicationListener),
override fun create() {
    ...
    stage = Stage(ScreenViewport())
    ...
}
an error occurs inside of Stage when it attempts to compile a shader.
I.e., in SpriteBatch.java from com.badlogic.gdx.graphics.g2d,
ShaderProgram shader = new ShaderProgram(vertexShader, fragmentShader);
if (shader.isCompiled() == false) throw new IllegalArgumentException("Error compiling shader: " + shader.getLog());
shader.isCompiled() appears to always return false for a HeadlessApplication.
Since there is no answer so far, I want to share my current knowledge/opinion:
First, we need to ask the question: can we even write unit tests for the GUI in general? If you want an in-depth answer, take a look at this question. To sum up: you can unit test your GUI by calculating hashes from framebuffers, but the general recommendation is to move as much logic out of your GUI as possible and not unit test the GUI at all. In addition, my server and most servers that run the code have no hardware support for OpenGL. Assuming we don't want to bother with software rendering, unit testing the GUI itself is not an option.
Based on this information I assumed my logic wasn't separated well enough from my GUI. But looking further into scene2d and a lot of the tutorials around it, it became pretty clear that scene2d by design wants you to combine the logic and GUI parts of your code; see this question for reference.
Assuming you share the opinion that scene2d makes it pretty hard to separate your logic and GUI, we are now facing the OP's question while the common solutions are not applicable.
In my opinion, all elements of libGDX should be fully compatible with the headless backend; that they are not is a design flaw in libGDX. The only workaround I have come up with so far is to mock a SpriteBatch if needed and pass it to the Stage:
val stage = Stage(myViewport, if (isHeadless) mock(SpriteBatch::class.java) else SpriteBatch())
While this allows you to use your Stage normally, I don't consider it a real solution, since you need to write your code test-aware.

How to choose TDD starting point in a real world project?

I've read tons of articles and seen tons of screencasts about TDD, but I'm still struggling to use it in a real-world project. My main issue is that I don't know where to start, or which test should be the first one.
Suppose I have to write a client library calling an external system's methods (e.g. notification).
I want this client to work as follows:
NotificationClient client = new NotificationClient("abcd1234"); // client ID
Response code = client.notifyOnEvent(Event.LIMIT_REACHED, 100); // some params of call
There is some translation and message format preparation behind the scenes, so I'd like to hide it from my client apps.
I don't know where and how to start.
Should I make up some rough classes set for this library?
Should I start with testing NotificationClient as below
public void testClientSendInvalidEventCommand() {
    NotificationClient client = new NotificationClient(...);
    Response code = client.notifyOnEvent(Event.WRONG_EVENT);
    assertEquals(1223, code.codeValue());
}
If so, with such a test I'm forced to write a complete working implementation at once, with none of the baby steps TDD prescribes. I can mock out something in the client, but then I have to know upfront what to mock, so some upfront design has to be made.
Maybe I should start from the bottom, test the message formatting component first, and then use it in the proper client test?
Which way is the right one to go?
Should we always start from the top (and if so, how do we deal with the huge first step that requires)?
Can we start with any class realizing a tiny part of the desired feature (such as the Formatter in this example)?
If I knew where to aim my tests, it would be a lot easier for me to proceed.
I'd start with this line:
NotificationClient client = new NotificationClient("abcd1234"); // client ID
Sounds like we need a NotificationClient, which needs a client ID. That's an easy thing to test for. My first test might look something like:
public void testNewClientAbcd1234HasClientId() {
    NotificationClient client = new NotificationClient("abcd1234");
    assertEquals("abcd1234", client.clientId());
}
Of course, it won't compile at first - not until I'd written a NotificationClient class with a constructor that takes a string parameter and a clientId() method that returns a string - but that's part of the TDD cycle.
public class NotificationClient {
    public NotificationClient(string clientId) {
    }

    public string clientId() {
        return "";
    }
}
At this point, I can run my test and watch it fail (because I've hard-coded clientId()'s return to be an empty string). Once I've got my failing unit test, I write just enough production code (in NotificationClient) to get the test to pass:
public string clientId() {
    return "abcd1234";
}
Now all my tests pass, so I can consider what to do next. The obvious (well, obvious to me) next step is to make sure that I can create clients whose ID isn't "abcd1234":
public void testNewClientBcde2345HasClientId() {
    NotificationClient client = new NotificationClient("bcde2345");
    assertEquals("bcde2345", client.clientId());
}
I run my test suite and observe that testNewClientBcde2345HasClientId() fails while testNewClientAbcd1234HasClientId() passes, and now I've got a good reason to add a member variable to NotificationClient:
public class NotificationClient {
    private string _clientId;

    public NotificationClient(string clientId) {
        _clientId = clientId;
    }

    public string clientId() {
        return _clientId;
    }
}
Assuming no typographical errors have snuck in, that'll get all my tests to pass, and I can move on to whatever the next step is. (In your example, it would probably be testing that notifyOnEvent(Event.WRONG_EVENT) returns a Response whose codeValue() equals 1223.)
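For what it's worth, that next test might look roughly like this sketch, which is essentially the question's own testClientSendInvalidEventCommand continued from the client built above (the 1223 code and WRONG_EVENT come straight from the question):

public void testNotifyOnWrongEventReturnsCode1223() {
    NotificationClient client = new NotificationClient("abcd1234");
    Response code = client.notifyOnEvent(Event.WRONG_EVENT);
    assertEquals(1223, code.codeValue());
}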
Does that help any?
Don't confuse acceptance tests, which hook into each end of your application and form an executable specification, with unit tests.
If you are doing 'pure' TDD you write an acceptance test which drives the unit tests that drive the implementation. testClientSendInvalidEventCommand is your acceptance test, but depending on how complicated things are you will delegate the implementation to multiple classes you can unit test separately.
How complicated things get before you have to split them up to test and understand them properly is why it is called Test Driven Design.
You can choose to let tests drive your design from the bottom up or from the top down. Both work well for different developers in different situations. Either approach will force you to make some of those "upfront" design decisions, but that's a good thing. Making those decisions in order to write your tests is test-driven design!
In your case you have an idea what the high level external interface to the system you are developing should be so let's start there. Write a test for how you think users of your notification client should interact with it and let it fail. This test is the basis for your acceptance or integration tests and they are going to continue failing until the features they describe are finished. That's ok.
Now step down one level. What are the steps which need to occur to provide that high level interface? Can we write an integration or unit test for those steps? Do they have dependencies you had not considered which might cause you to change the notification center interface you have started to define? Keep drilling down depth-first defining behavior with failing tests until you find that you have actually reached a unit test. Now implement enough to pass that unit test and continue. Get unit tests passing until you have built enough to pass an integration test and so on. You'll eventually have completed a depth-first construction of a tree of tests and should have a well tested feature whose design was driven by your tests.
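To make "step down one level" concrete with the question's example: the message-formatting component the OP mentions could be one of those lower-level units. A hedged sketch of such a unit test follows (MessageFormatter, its format() method, and the expected wire format are all invented for illustration):

public void testLimitReachedEventIsFormattedWithItsParameter() {
    MessageFormatter formatter = new MessageFormatter();
    String message = formatter.format(Event.LIMIT_REACHED, 100);
    // "LIMIT_REACHED:100" is a made-up wire format; assert whatever the external system actually expects.
    assertEquals("LIMIT_REACHED:100", message);
}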
One goal of TDD is that the testing informs the design. So the fact that you need to think about how to implement your NotificationClient is a good thing; it forces you to think of (hopefully) simple abstractions up front.
Also, TDD sort of assumes constant refactoring. Your first solution probably won't be the last; so as you refine your code the tests are there to tell you what breaks, from compile errors to actual runtime issues.
So I would just jump right in and start with the test you suggested. As you create mocks, you will need to create tests for the actual implementations of what you are mocking. You will find things make sense and need to be refactored, so you will need to modify your tests as you go. That's the way it's supposed to work...
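For example, if the formatter ends up being one of the things you mock in the client test, a sketch using Mockito-style mocks might look like the following (the NotificationClient constructor that accepts a MessageFormatter is an assumed design, not something given in the question):

public void testNotifyOnEventDelegatesFormattingToTheFormatter() {
    MessageFormatter formatter = mock(MessageFormatter.class);
    when(formatter.format(Event.LIMIT_REACHED, 100)).thenReturn("LIMIT_REACHED:100");

    NotificationClient client = new NotificationClient("abcd1234", formatter);
    client.notifyOnEvent(Event.LIMIT_REACHED, 100);

    // The client's job here is delegation; the real formatter gets its own unit tests.
    verify(formatter).format(Event.LIMIT_REACHED, 100);
}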

Application Service Layer: Unit Tests, Integration Tests, or Both?

I've got a bunch of methods in my application service layer that are doing things like this:
public void Execute(PlaceOrderOnHoldCommand command)
{
    var order = _repository.Load(command.OrderId);
    order.PlaceOnHold();
    _repository.Save(order);
}
And at present, I have a bunch of unit tests like this:
[Test]
public void PlaceOrderOnHold_LoadsOrderFromRepository()
{
    var repository = new Mock<IOrderRepository>();
    const int orderId = 1;
    var order = new Mock<IOrder>();
    repository.Setup(r => r.Load(orderId)).Returns(order.Object);
    var command = new PlaceOrderOnHoldCommand(orderId);
    var service = new OrderService(repository.Object);
    service.Execute(command);
    repository.Verify(r => r.Load(It.Is<int>(x => x == orderId)), Times.Exactly(1));
}

[Test]
public void PlaceOrderOnHold_CallsPlaceOnHold()
{
    /* blah blah */
}

[Test]
public void PlaceOrderOnHold_SavesOrderToRepository()
{
    /* blah blah */
}
It seems to be debatable whether these unit tests add value that's worth the effort. I'm quite sure that the application service layer should be integration tested, though.
Should the application service layer be tested to this level of granularity, or are integration tests sufficient?
I'd write a unit test despite there also being an integration test. However, I'd likely make the test much simpler by eliminating the mocking framework, writing my own simple mock, and then combining all those tests to check that the order in the mock repository was on hold.
[Test]
public void PlaceOrderOnHold_LoadsOrderFromRepository()
{
    const int orderId = 1;
    var repository = new MyMockRepository();
    repository.save(new MyMockOrder(orderId));
    var command = new PlaceOrderOnHoldCommand(orderId);
    var service = new OrderService(repository);
    service.Execute(command);
    Assert.IsTrue(repository.getOrder(orderId).isOnHold());
}
There's really no need to check to be sure that load and/or save is called. Instead I'd just make sure that the only way that MyMockRepository will return the updated order is if load and save are called.
This kind of simplification is one of the reasons that I usually don't use mocking frameworks. It seems to me that you have much better control over your tests, and a much easier time writing them, if you write your own mocks.
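For completeness, a hand-rolled mock along those lines might look roughly like this Java-flavored sketch (the original is C#/NUnit; Order, OrderRepository, and their copy(), getId() members are assumed types and methods). Handing out copies from load() is what guarantees that the assertion above can only pass if the service really called both load and save:

import java.util.HashMap;
import java.util.Map;

class MyMockRepository implements OrderRepository {
    private final Map<Integer, Order> saved = new HashMap<>();

    // Return a copy, so changes that were never saved stay invisible to the test.
    public Order load(int orderId) {
        Order stored = saved.get(orderId);
        return stored == null ? null : stored.copy();
    }

    public void save(Order order) {
        saved.put(order.getId(), order.copy());
    }

    // Test-only accessor used by the assertion in the test above.
    Order getOrder(int orderId) {
        return saved.get(orderId);
    }
}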
Exactly: it's debatable! It's really good that you are weighing the expense/effort of writing and maintaining your test against the value it will bring you - and that's exactly the consideration you should make for every test you write. Often I see tests written for the sake of testing and thereby only adding ballast to the code base.
As a guideline, I usually want a full integration test of every important successful scenario/use case. The other tests I write are for parts of the code that are likely to break with future changes, or that have broken in the past. And that is definitely not all of the code. That's where your judgement and insight into the system and requirements come into play.
Assuming that you have an (integration) test for service.Execute(placeOrderOnHoldCommand), I'm not really sure it adds value to test whether the service loads an order from the repository exactly once. But it could! For instance, when your service previously had a nasty bug that would hit the repository ten times for a single order, causing performance issues (just making it up). In that case, I'd rename the test to PlaceOrderOnHold_LoadsOrderFromRepositoryExactlyOnce().
So for each and every test you have to decide for yourself ... hope that helps.
Notes:
The tests you show can be perfectly valid and look well written.
Your sequence of test methods seems to be inspired by the way the Execute(...) method is currently implemented. When you structure your tests this way, you may be tying yourself to a specific implementation. Tests like that can actually make change harder - make sure you're only testing the important external behavior of your class.
I usually write a single integration test of the primary scenario. By primary scenario I mean the successful path through all the code being tested. Then I write unit tests for all the other scenarios: checking all the cases in a switch, testing exceptions, and so forth.
I think it is important to have both. Yes, it is possible to test it all with integration tests only, but that makes your tests long-running and harder to debug. On average I think I have about 10 unit tests per integration test.
I don't bother to test one-liner methods unless something business-logic-like happens in that line.
Update: just to make it clear, because I'm doing test-driven development I always write the unit tests first and typically do the integration test at the end.

Understanding metaClass in Grails tests

I'm currently learning grails, and working through the guide on testing.
There's an example provided which covers writing a test for this piece of code in a fictional BookController:
def show = {
    [ book : Book.get( params.id ) ]
}
The guide suggests the following approach for mocking out the result of params.id:
void testA() {
    BookController.metaClass.getParams = {-> [id:10] }
}
As this is a change on the static definition of BookController, does this persist between tests, or does the Grails magic somehow automatically clean up in the tearDown method?
I.e., if I were to write a subsequent test that skipped the setup of metaClass.getParams and that ran after testA, would params.id still return 10?
If so, what's the standard grails practice for cleaning up in test tear-down? It doesn't seem to be covered in the guide that I'm reading.
You're using an ancient version of the docs covering 1.0.x. Testing support is a lot more solid now, so see the updated chapter 9 in http://grails.org/doc/latest/

Unit testing functions with side effects?

Let's say you're writing a function to check if a page was reached by the appropriate URL. The page has a "canonical" stub - for example, while a page could be reached at stackoverflow.com/questions/123, we would prefer (for SEO reasons) to redirect it to stackoverflow.com/questions/123/how-do-i-move-the-turtle-in-logo - and the actual redirect is safely contained in its own method (e.g. redirectPage($url)). But how do you properly test the function which calls it?
For example, take the following function:
function checkStub($questionId, $baseUrl, $stub) {
    $canonicalStub = $model->getStub($questionId);
    if ($stub != $canonicalStub) {
        redirectPage($baseUrl . $canonicalStub);
    }
}
If you were to unit test the checkStub() function, wouldn't the redirect get in the way?
This is part of a larger problem where certain functions seem to get too big and leave the realm of unit testing and into the world of integration testing. My mind immediately thinks of routers and controllers as having these sorts of problems, as testing them necessarily leads to the generation of pages rather than being confined to just their own function.
Do I just fail at unit testing?
You say...
This is part of a larger problem where certain functions seem to get too big and leave the realm of unit testing and into the world of integration testing
I think this is why unit testing is (1) hard and (2) leads to code that doesn't crumble under its own weight. You have to be meticulous about breaking all of your dependencies or you end up with unit tests == integration tests.
In your example, you would inject a redirector as a dependency. You use a mock, double or spy. Then you do the tests as @atk lays out. Sometimes it's not worth it. More often it forces you to write better code. And it's hard to do without an IoC container.
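A rough Java sketch of that shape (the original is PHP; Redirector, QuestionModel, StubChecker, the spy, and all the test data are invented purely to illustrate injecting the redirect and the model as dependencies):

import static org.junit.Assert.assertEquals;

interface Redirector { void redirectPage(String url); }
interface QuestionModel { String getStub(int questionId); }

// Test spy: records the redirect instead of actually sending HTTP headers.
class SpyRedirector implements Redirector {
    String lastUrl;
    public void redirectPage(String url) { lastUrl = url; }
}

class StubChecker {
    private final QuestionModel model;
    private final Redirector redirector;

    StubChecker(QuestionModel model, Redirector redirector) {
        this.model = model;
        this.redirector = redirector;
    }

    void checkStub(int questionId, String baseUrl, String stub) {
        String canonicalStub = model.getStub(questionId);
        if (!canonicalStub.equals(stub)) {
            redirector.redirectPage(baseUrl + canonicalStub);
        }
    }
}

class StubCheckerTest {
    private final QuestionModel model = id -> "/how-do-i-move-the-turtle-in-logo";

    // Negative case: a non-canonical stub triggers a redirect to the canonical URL.
    public void testWrongStubRedirectsToCanonicalUrl() {
        SpyRedirector spy = new SpyRedirector();
        new StubChecker(model, spy).checkStub(123, "stackoverflow.com/questions/123", "/wrong-stub");
        assertEquals("stackoverflow.com/questions/123/how-do-i-move-the-turtle-in-logo", spy.lastUrl);
    }

    // Positive case: the canonical stub does not redirect at all.
    public void testCanonicalStubDoesNotRedirect() {
        SpyRedirector spy = new SpyRedirector();
        new StubChecker(model, spy).checkStub(123, "stackoverflow.com/questions/123",
                "/how-do-i-move-the-turtle-in-logo");
        assertEquals(null, spy.lastUrl);
    }
}

The two test methods mirror the positive and negative checks @atk describes below.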
This is an old question, but I think this answer is relevant. @Rob states that you would inject a redirector as a dependency - and sure, this works. However, your problem is that you don't have a good separation of concerns.
You need to make your functions as atomic as possible, and then compose larger functionality using the granular functions you've created. You wrote this:
function checkStub($questionId, $baseUrl, $stub) {
    $canonicalStub = $model->getStub($questionId);
    if ($stub != $canonicalStub) {
        redirectPage($baseUrl . $canonicalStub);
    }
}
I'd write this:
function checkStubEquality($stub1, $stub2) {
    return $stub1 == $stub2;
}

$canonicalStub = $model->getStub($questionId);
if (!checkStubEquality($canonicalStub, $stub)) redirectPage($baseUrl . $canonicalStub);
It sounds like you just have another test case. You need to check that the stub is identified correctly as a stub with both positive and negative testing, and you need to check that the page to which you are redirected is correct.
Or do I totally misunderstand the question?