Testing Attribute Behavior

I'm using SharpArch with SharpArch.Contrib's [Transaction] attribute. The attribute is applied to an application service method; if an exception is thrown during that method, any changes to domain objects are rolled back. From all appearances this works well.
However, I'm writing NUnit tests to confirm that exceptions are thrown when appropriate (invalid state, security errors, etc.), but I also want to confirm that the Transaction attribute is present and doing its job of rolling back the changes. Is there any way I can do this?
I do trust that SharpArch.Contrib’s Transaction attribute is solid code, but some future programmer could accidentally remove the Transaction attribute from a method or disable it during testing which would not be caught by the unit tests. Am I being overly cautious?
Thanks
Dan

I think you're being a little paranoid, but that's OK :-)
If you trust the SharpArch code, then I wouldn't worry about the exception being thrown back to you correctly. Assume that it works and pick up any problems during integration or functional testing. Testing third party code only has value when you either don't trust it or you're trying to understand it.
On the other hand, if you want to test for the presence of the attribute (i.e. validate that the method has the appropriate attribute on it), then you can write a test that uses reflection to inspect the method signature and make some assertions. It's not too hard to do: you use the MethodInfo object to enumerate the attributes on the method and scan for the TransactionAttribute.
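For example, a minimal NUnit sketch (OrderService and CreateOrder are hypothetical stand-ins for your own service method, and it assumes SharpArch.Contrib's attribute class is named TransactionAttribute):
using System.Linq;
using System.Reflection;
using NUnit.Framework;
// plus the namespace that defines TransactionAttribute

[TestFixture]
public class TransactionAttributeTests
{
    [Test]
    public void CreateOrder_IsMarkedTransactional()
    {
        // Look up the method whose transactional behavior we depend on.
        MethodInfo method = typeof(OrderService).GetMethod("CreateOrder");
        Assert.IsNotNull(method, "CreateOrder not found - was it renamed?");

        // Scan the method's attributes for TransactionAttribute.
        bool isTransactional = method
            .GetCustomAttributes(typeof(TransactionAttribute), true)
            .Any();

        Assert.IsTrue(isTransactional,
            "CreateOrder lost its [Transaction] attribute; failures will no longer roll back.");
    }
}
A test like this is exactly the guard against the "future programmer removes the attribute" scenario from the question.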

I think testing the transactionality of your code makes perfect sense. I would do it at the integration level, where you connect to the real database; then it is easy to feed in bad data, catch the exception, and assert that no changes were made in the database.
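For example, something along these lines (a rough sketch: orderService, CountOrders and InvalidOrderData are hypothetical stand-ins for your own service, a helper that queries the test database directly, and input that violates a domain rule):
using System;
using NUnit.Framework;

[TestFixture]
public class TransactionRollbackTests
{
    private OrderService orderService; // hypothetical service wired to the real test database

    // int CountOrders()         - hypothetical helper that queries the test database directly
    // object InvalidOrderData() - hypothetical builder for input that violates a domain rule

    [Test]
    public void FailedServiceCall_LeavesDatabaseUnchanged()
    {
        int before = CountOrders();

        // Deliberately invalid input makes the service throw mid-transaction.
        Assert.Throws<InvalidOperationException>(
            () => orderService.CreateOrder(InvalidOrderData()));

        // If the [Transaction] attribute rolled back, the database is untouched.
        Assert.AreEqual(before, CountOrders());
    }
}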
I am not sure how you would test it on a unit test level.


How is unit testing testing anything?

I don't understand how I'm testing anything with unit testing.
Suppose I am testing that my repository class can retrieve values from the database correctly. The proper way to do this would be to actually call the real database and retrieve and check those values.
But the idea behind unit testing is that it should be done in isolation, and connecting to a running database is not isolation. So what is usually done is to mock or stub the database.
But why would testing on a fake database with hardcoded data and hardcoded return values even test anything? It seems tautological and a waste of time.
Or am I not understanding how to unit test properly?
Does one even unit test database calls?
Short answer: you are testing the logic, and leaving out the side effects.
You aren't testing everything; but you are testing something.
Furthermore, if you keep in mind that you aren't really testing the code with side effects, then you are motivated to arrange your code so that the pieces that actually depend on the side effect are small. The big pieces don't actually care where the data comes from, so those are easy to test.
So "something" can be "most things".
There is an impedance problem -- if your test doubles impersonate the production originals inadequately, then some of your test results will be inaccurate.
"my philosophy is to test as little as possible to reach a given level of confidence" (Kent Beck, 2008)
One way of imagining "as little as possible" is to think in terms of cost -- we're aiming for a given confidence level, so we want to achieve as much of that confidence as we can using cheap unit tests, and then make up the difference with more expensive techniques.
Cory Benfield's talk Building Protocol Libraries the Right Way describes an example of the kind of separation we're talking about here. The logic of how to parse an HTTP message is separable from the problem of reading the bytes. If you make the complicated part easy to test, and the hard to test part too simple to fail, your chances of succeeding are quite good.
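A tiny sketch of that shape (hypothetical names, in C# since that's the language used elsewhere on this page): the parsing logic is a pure function over a string, so it needs no socket to be tested.
using NUnit.Framework;

// The complicated part: pure parsing logic with no I/O, trivially testable.
public static class StatusLine
{
    public static int ParseStatusCode(string line)
    {
        // e.g. "HTTP/1.1 200 OK" -> 200
        return int.Parse(line.Split(' ')[1]);
    }
}

[TestFixture]
public class StatusLineTests
{
    [Test]
    public void ParsesTheStatusCode()
    {
        Assert.AreEqual(200, StatusLine.ParseStatusCode("HTTP/1.1 200 OK"));
    }
}
The code that actually reads bytes off the wire lives elsewhere and stays "too simple to fail": it just feeds whatever it receives into the parser.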
I think your concern is valid. For me, TDD is more of an evolutionary design practice than unit testing practice, but I'll save that for another discussion.
In your example, what we are really testing is that the logic contained within your individual classes is sound. By stubbing the data coming from the database, you create a controlled scenario and can verify that your code handles it correctly. This makes it much easier to achieve full test coverage for all data scenarios. You're correct that this doesn't test the whole system end to end, but the point is to reduce overall test maintenance costs and enable faster feedback.
My approach is to mock most collaborators at the unit-test level, then write acceptance tests at the integration-test level, which validate your system using real data. Because the unit tests with their mocked data let you try out various data scenarios, you only need to exercise a few of those scenarios with integration tests to feel confident that your code will perform as you expect.
You can test your code against an actual database in isolation. Just create a new database instance for every test, or execute tests synchronously one after another and clean the database before each test.
But using an actual database will make your tests slow, which will slow down your work, because you want quick feedback on what you are doing.
Do not test every class; test the main feature logic, which can use many different classes, and mock/stub only the dependencies that make tests slow.
Find your application boundaries and test the logic between them without mocking.
For example, in a trivial web API application the boundaries can be:
- controller action -> request (input)
- controller action -> response (output)
- database -> side effect of the received request
Assume we live in a perfect world where setting up a new database and web server takes milliseconds. Then you would test the whole pipeline of your application:
1. Configure the database for the test
2. Send a request to the web API server
3. Assert that the response contains the expected data
4. Assert that the database state changed as expected
But in today's world your boundaries will be the controller action and an abstracted database access point, which makes your test look like this (a sketch in code follows the list):
1. Configure a mocked database access point (repository)
2. Call the controller action with given parameters
3. Assert that the action returns the expected result
4. Possibly assert that the mocked repository received the expected update arguments
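A minimal sketch of those four steps using NUnit and Moq (User, IUserRepository, UserController and the Rename action are hypothetical stand-ins for your own types):
using Moq;
using NUnit.Framework;

public class User { public int Id; public string Name; }
public class RenameResult { public bool Success; }

public interface IUserRepository
{
    User GetById(int id);
    void Save(User user);
}

public class UserController
{
    private readonly IUserRepository repo;
    public UserController(IUserRepository repo) { this.repo = repo; }

    public RenameResult Rename(int id, string newName)
    {
        var user = repo.GetById(id);
        user.Name = newName;
        repo.Save(user);
        return new RenameResult { Success = true };
    }
}

[TestFixture]
public class UserControllerTests
{
    [Test]
    public void Rename_SavesTheUpdatedUser()
    {
        // 1. Configure a mocked database access point (repository).
        var repository = new Mock<IUserRepository>();
        repository.Setup(r => r.GetById(42)).Returns(new User { Id = 42, Name = "Old" });

        // 2. Call the controller action with given parameters.
        var controller = new UserController(repository.Object);
        var result = controller.Rename(42, "New");

        // 3. Assert that the action returns the expected result.
        Assert.IsTrue(result.Success);

        // 4. Assert that the mocked repository received the expected update arguments.
        repository.Verify(r => r.Save(It.Is<User>(u => u.Name == "New")), Times.Once());
    }
}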
If your application has no logic and just reads/updates data in the database, test against the actual database or, if your database framework allows it, use an in-memory database.
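For example, SQLite can run entirely in memory (a sketch using the Microsoft.Data.Sqlite package; the users table is just for illustration):
using Microsoft.Data.Sqlite;

public static class InMemoryDatabase
{
    public static SqliteConnection Open()
    {
        // The database lives only as long as this connection stays open,
        // so each test that calls Open() starts from a clean, isolated state.
        var connection = new SqliteConnection("Data Source=:memory:");
        connection.Open();

        var create = connection.CreateCommand();
        create.CommandText = "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)";
        create.ExecuteNonQuery();
        return connection;
    }
}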

Grails Testing Hiccups

I have two testing questions. Both are probably easily answered. The first is that I wrote this unit test in Grails:
void testCount() {
    mockDomain(UserAccount)
    new UserAccount(firstName: "Ken").save()
    new UserAccount(firstName: "Bob").save()
    new UserAccount(firstName: "Dave").save()
    assertEquals(3, UserAccount.count())
}
For some reason, I get 0 returned back. Did I forget to do something?
EDIT: Oh, I understand. The validation constraints were violated, so the objects weren't saved. Is there any way to get some feedback here? That's a really crappy thing to have happen....
The second question is for those who use IDEA. What should I be running - IDEA's JUnit tests, or Grails targets? I have two options.
Also, why does IDEA say that my tests pass and give me a green light even though the test above actually fails? This will really drive me nuts if I have to check the HTML test reports every time I run my tests.....
Help?
I always do object.save(failOnError: true) in tests to avoid silent failures like this. This causes an exception to be thrown if validation fails. Even without a real database in a unit test, most of the constraints will be checked, although I prefer to use integration tests if I want to test complex relationships between domain objects.
I personally haven't found IDEA's JUnit test runner to be particularly useful when working with Grails. It is likely fine to use it for unit tests. For integration tests you might consider setting up an Ant target in debug mode to run your tests.
Over time, running the tests starts to take so long that I tend to run them exclusively from the command line to avoid the additional overhead IntelliJ adds.
Regarding your unit test, I am pretty sure you would need to run an integration test to get a count that is not zero.
I'm not sure which unit test class you're using exactly, but since GORM is not bootstrapped in unit tests, I'm not sure the domain-object mocking supports incrementing the count.
Your test would likely pass as an integration test, provided that your domain objects validate.
Add flush: true to your save calls:
new UserAccount(firstName: "Ken").save(flush: true)
...
Grails sets the flush mode of the Hibernate session to manual, so the change is not persisted when the action returns, but the session remains open while the view is rendered. This allows views to access lazy-loaded collections and relationships while preventing changes from being persisted automatically.

How far should I go with unit testing?

I'm trying to unit test in a personal PHP project like a good little programmer, and I'd like to do it correctly. From what I hear, you're supposed to test just the public interface of a method, but I was wondering whether that still applies to the case below.
I have a method that generates a password reset token in the event that a user forgets his or her password. The method returns one of two things: nothing (null) if everything worked fine, or an error code signifying that the user with the specified username doesn't exist.
If I'm only testing the public interface, how can I be sure that the password reset token IS going in the database if the username is valid, and ISN'T going in the database if the username is NOT valid? Should I do queries in my tests to validate this? Or should I just kind of assume that my logic is sound?
Now this method is very simple and this isn't that big of a deal - the problem is that this same situation applies to many other methods. What do you do in database centric unit tests?
Code, for reference if needed:
public function generatePasswordReset($username)
{
    $this->sql = 'SELECT id
                  FROM users
                  WHERE username = :username';
    $this->addParam(':username', $username);
    $user = $this->query()->fetch();

    if (!$user) {
        return self::$E_USER_DOESNT_EXIST;
    } else {
        $code = md5(uniqid());
        $this->addParams(array(':uid'      => $user['id'],
                               ':code'     => $code,
                               ':duration' => 24 // in hours, how long the reset is valid
        ));

        // generate new code, delete old one if present
        $this->sql  = 'DELETE FROM password_resets WHERE user_id = :uid;';
        $this->sql .= "INSERT INTO password_resets (user_id, code, expires)
                       VALUES (:uid, :code, now() + interval ':duration hours')";
        $this->execute();
    }
}
The great thing about unit testing, for me at least, is that it shows you where you need to refactor. Using your sample code above, you've basically got five things happening in one method:
//1. get the user from the DB
//2. in a big else, check if user is null
//3. create an array containing the userID, a code, and expiry
//4. delete any existing password resets
//5. create a new password reset
Unit testing is also great because it helps highlight dependencies. This method, as shown above, is dependent on a DB, rather than an object that implements an interface. This method interacts with systems outside its scope, and really could only be tested with an integration test, rather than a unit test. Unit tests are for ensuring the working/correctness of a unit of work.
Consider the Single Responsibility Principle: "Do one thing". It applies to methods as well as classes.
I'd suggest that your generatePasswordReset method should be refactored to:
- be given a pre-defined existing user object/id. Do all those sanity checks outside of this method. Do one thing.
- put the password reset code into its own method. That would be a single unit of work that could be tested independently of the SELECT, DELETE and INSERT.
- make a new method that could be called OverwriteExistingPwdChangeRequests(), which would take care of the DELETE + INSERT.
The reason this function is more difficult to unit test is because the database update is a side-effect of the function (i.e. there is no explicit return for you to test).
One way of dealing with state updates on remote objects like this is to create a mock object that provides the same interface as the DB (i.e. it looks identical from the perspective of your code). Then in your test you can check the state changes within this mock object and confirm you have received what you should.
You can break it down some more; that function is doing a lot, which makes testing it tricky, not impossible but tricky. If, on the other hand, you pulled out some smaller functions (getUserByUsername, deletePasswordByUserID, addPasswordByUserId, etc.), then you could test each of those once and know they work, so you don't have to test them again. This way you test the lower-level calls, making sure they are sound, so you don't have to worry about them further up the chain. Then for this function all you need to do is pass it a user that does not exist and make sure it comes back with a USER_DOESNT_EXIST error, then one where a user does exist (this is where your test DB comes in). The inner workings have already been exercised elsewhere (hopefully).
Unit tests serve the purpose of verifying that a unit works. If you care to know whether a unit works or not, write a test. It's that simple. Choosing to write a unit test or not shouldn't be based on some chart or rule of thumb. As a professional it's your responsibility to deliver working code, and you can't know if it's working or not unless you test it.
Now, that doesn't mean you write a test for each and every line of code. Nor does it necessarily mean you write a unit test for every single function. Deciding to test or not test a particular unit of work boils down to risk. How willing are you to risk that your piece of untested code gets deployed?
If you're asking yourself "how do I know if this functionality works", the answer is "you don't, until you have repeatable tests that prove it works".
In general one might "mock" the object you are invoking, verifying that it receives the expected requests.
In this case I'm not sure how helpful that is; you almost end up writing the same logic twice ... we thought we sent "DELETE from password" etc. Oh look, we did!
Hmmm, what did we actually check? If the string was badly formed, we wouldn't know!
It may be against the letter of Unit testing law, but I would instead test these side-effects by doing separate queries against the database.
Testing the public interface is necessary, but not sufficient. There are many philosophies on how much testing is required, and I can only give my own opinion. Test everything. Literally. You should have a test that verifies that each line of code has been exercised by the test suite. (I only say 'each line' because I'm thinking of C and gcov, and gcov provides line-level granularity. If you have a tool that has finer resolution, use it.) If you can add a chunk of code to your code base without adding a test, the test suite should fail.
Databases are global variables. Global variables are public interfaces for every unit that uses them. Your test cases must therefore vary inputs not only on the function parameter, but also the database inputs.
If your unit tests have side-effects (like changing a database) then they have become integration tests. There is nothing wrong in itself with integration tests; any automated testing is good for the quality of your product. But integration tests have a higher maintenance cost because they are more complex and are easier to break.
The trick is therefore to minimize the code that can only be tested with side effects. Isolate and hide the SQL queries in a separate MyDatabase class which does not contain any business logic. Pass an instance of this object to your business logic code.
Then when you unit test your business logic, you can substitute the MyDatabase object with a mock instance which is not connected to a real database, and which can be used to verify that your business logic code uses the database correctly.
See the documentation of SimpleTest (a PHP testing framework with mock-object support) for an example.
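To make the shape concrete, here is the same idea sketched in C# for consistency with the other examples on this page; the interface, method names and service are all hypothetical, and SimpleTest's mock objects play the same role in PHP:
using System;
using Moq;
using NUnit.Framework;

// Business logic depends on this narrow interface, never on SQL directly.
public interface IUserDatabase
{
    int? FindUserId(string username);
    void ReplaceResetToken(int userId, string code, int durationHours);
}

public class PasswordService
{
    private readonly IUserDatabase db;
    public PasswordService(IUserDatabase db) { this.db = db; }

    // Returns false when the user does not exist, true when a token was stored.
    public bool GeneratePasswordReset(string username)
    {
        int? userId = db.FindUserId(username);
        if (userId == null)
            return false;

        db.ReplaceResetToken(userId.Value, Guid.NewGuid().ToString("N"), 24);
        return true;
    }
}

[TestFixture]
public class PasswordServiceTests
{
    [Test]
    public void UnknownUser_NeverWritesToTheDatabase()
    {
        var db = new Mock<IUserDatabase>();
        db.Setup(d => d.FindUserId("ghost")).Returns((int?)null);

        var service = new PasswordService(db.Object);

        Assert.IsFalse(service.GeneratePasswordReset("ghost"));

        // The mock verifies the side effect the question asks about:
        // no reset token was written for an unknown user.
        db.Verify(d => d.ReplaceResetToken(It.IsAny<int>(), It.IsAny<string>(), It.IsAny<int>()),
                  Times.Never());
    }
}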

unit tests for screen-scraping?

I'm new to unit testing so I'd like to get the opinion of some who are a little more clued-in.
I need to write some screen-scraping code shortly. The target system is a web UI where there'll be copious HTML parsing and similar volatile goodness involved. I'll never be notified of any changes by the target system (e.g. they redesign the site or otherwise change functionality), so I anticipate my code breaking regularly.
So I think my real question is, how much, if any, of my unit testing should worry about or deal with the interface (the website I'm scraping) changing?
I think unit tests or not, I'm going to need to test heavily at runtime since I need to ensure the data I'm consuming is pristine. Even if I ran unit tests prior to every run, the web UI could still change between tests and runtime.
So do I focus on in-code testing and exception handling? Does that mean to draw a line in the sand and exclude this kind of testing from unit tests altogether?
Thanks
Unit testing should always be designed to have repeatable known results.
Therefore, to unit test a screen-scraper, you should be writing the test against a known set of HTML (you may use a mock object to represent this).
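For example (Scraper.ExtractPrice is a hypothetical parsing function; the point is that the HTML fixture is fixed and known, so the test is repeatable regardless of what the live site does today):
using System.Globalization;
using NUnit.Framework;

public static class Scraper
{
    // Hypothetical parsing logic for a known markup shape.
    public static decimal ExtractPrice(string html)
    {
        const string marker = "<span class='price'>";
        int start = html.IndexOf(marker) + marker.Length;
        int end = html.IndexOf("</span>", start);
        return decimal.Parse(html.Substring(start, end - start), CultureInfo.InvariantCulture);
    }
}

[TestFixture]
public class ScraperTests
{
    // A fixed HTML fixture standing in for the live page.
    private const string SampleHtml =
        "<html><body><span class='price'>19.99</span></body></html>";

    [Test]
    public void ExtractPrice_ParsesKnownMarkup()
    {
        Assert.AreEqual(19.99m, Scraper.ExtractPrice(SampleHtml));
    }
}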
The sort of thing you are talking about doesn't really sound like a scenario for unit-testing to me - if you want to ensure your code runs as robustly as possible, then it is more, as you say, about in-code testing and exception handling.
I would also include some alerting code, so the system makes you aware of any occasions when the HTML does not get parsed as expected.
You should try to separate your tests as much as possible. Test the data handling with low level tests that execute the actual code (i.e. not via a simulated browser).
In the simulated browser, just make sure that the right things happen when you click on buttons, when you submit forms, and when you follow links.
Never try to test whether the layout is correct.
I think the thing unit tests might be useful for here is that, if you have a build server, they will give you an early warning that the code no longer works. You can't write a unit test to prove that screen-scraping will still work if the site changes its HTML (because you can't tell what they will change).
You might be able to write a unit test to check that something useful is returned from your efforts.

How to unit-test a NextPasswordChangeDate function against the Active Directory

I am working on a project that uses Active Directory intensively. I have set up a few unit tests for several things against AD, some of which I achieve using mocked objects and some through real calls against AD.
As one of the functions of my project, I have to retrieve a so-called "user profile". This user profile consists mostly of simple attributes, like "cn", "company", "employeeid", etc. However, one property that I am trying to fill is not so simple: "NextPasswordChangeDate".
To the best of my knowledge, the only way to get this is to read the domain policy's maxPwdAge and use it together with the user's pwdLastSet.
Now my question: how can I unit test this in an intelligent way? I came up with three options, none of which is great:
1. Use my own account as the searched account, find out the date by other means, and hard-code it in the unit test. This way I can unit test my code well, but every month I have to change the unit test, because I changed my password.
2. Use some account that has "password never expires" set. This is kind of pointless, because I cannot really test the correctness of my code that way.
3. Use a mock object and make sure that the correct API calls happen. This option allows me to test the correctness of the function's behaviour, but then the tested logic is in fact in the unit test, and hence I cannot be sure that it is doing the right thing even if the test passes.
Which of the three do you suggest? Or maybe you have a better option?
Since options 1 and 2 rely on AD existing and having known values, they seem more like integration tests to me.
I generally take the side that any non-deterministic behavior should be interfaced out and mocked if possible (option 3). As you noted, this will always leave some real implementation code that is not unit-testable, but that would then be covered by your integration tests running against a known AD system.
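As a sketch of option 3 done so that the logic stays out of the test (IDirectoryInfo, UserProfileService and the method names are hypothetical):
using System;
using Moq;
using NUnit.Framework;

// Interface out the non-deterministic directory lookups.
public interface IDirectoryInfo
{
    TimeSpan GetMaxPasswordAge();              // backed by the domain policy's maxPwdAge
    DateTime GetPasswordLastSet(string user);  // backed by the user's pwdLastSet
}

public class UserProfileService
{
    private readonly IDirectoryInfo directory;
    public UserProfileService(IDirectoryInfo directory) { this.directory = directory; }

    // With the lookups injected, the logic under test is plain date arithmetic.
    public DateTime NextPasswordChangeDate(string user)
    {
        return directory.GetPasswordLastSet(user) + directory.GetMaxPasswordAge();
    }
}

[TestFixture]
public class UserProfileServiceTests
{
    [Test]
    public void NextChangeDate_IsLastSetPlusMaxAge()
    {
        var directory = new Mock<IDirectoryInfo>();
        directory.Setup(d => d.GetMaxPasswordAge()).Returns(TimeSpan.FromDays(30));
        directory.Setup(d => d.GetPasswordLastSet("dan")).Returns(new DateTime(2010, 1, 1));

        var service = new UserProfileService(directory.Object);

        Assert.AreEqual(new DateTime(2010, 1, 31), service.NextPasswordChangeDate("dan"));
    }
}
Here the date arithmetic lives in the production class, not in the test, so the test asserts a fixed expected value rather than re-deriving it; the thin code that actually reads maxPwdAge and pwdLastSet from AD is what your integration tests cover.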