I have recently started using SSDT database unit tests. I find I have many test conditions which are identical, but I can't find a way to reuse them. How many times do I need to say, "and only one row may be returned"?
Similarly, I have begun to find multiple unit tests which need the same data. This seems to mean that I should use the same pre-test step. But I also can't find a way to reuse the pre-test steps.
Is there something I'm missing? Either that there is a way to reuse these components, or that there is some reason why I don't really need to reuse them?
First and foremost, there is no way to reuse pre-tests, post-tests or test conditions across multiple tests. Yes, this is annoying; however, I can kind of understand it: ensuring each test is self-contained strikes me as a good idea.
One way I've gotten around the inability to reuse test data in the past is to write stored procedures that create that test data, include those stored procedures in the SSDT project I am testing, and then call them from the Pre-test step.
The downside of that approach is that those stored procedures end up as part of your deployed database, which is not a good thing: you don't want stored procs that exist solely for creating test data in your production databases. The way around that is to put these "test data" stored procs into a dedicated SSDT project and create a "Same database" database reference from that project to the SSDT project that contains the code to be tested. Deploying the test-data project will then deploy the referenced code project as well. This technique is better known as Composite Projects.
So that's one way of solving the "sharing test data" problem. I don't know of a way of sharing test conditions other than going and hacking the underlying C# code-behind but that's not something I'd be comfortable in recommending.
In a unit test class there is a section for (Common scripts), found in the same drop-down as the individual tests you add.
This section has "Test initialize" and "Test cleanup", where you can add code that runs before and after every test in that class.
Your general test structure would look like this:
1. "Common scripts > Test initialize": set up generic data (and check any test-initialize conditions).
2. "My Test > Pre-test": make any test-specific data adjustments (and check any pre-test conditions).
3. "My Test > Test": execute the testable code and check any test-specific conditions.
4. "My Test > Post-test": clean up any test-specific data (and check any post-test conditions).
5. "Common scripts > Test cleanup": clean up any generic data (and check any cleanup conditions).
That is half the battle. Unfortunately there doesn't appear to be a way to run the same test conditions for a number of tests.
However! Test initialize and Test cleanup can have their own test conditions. So if each of your unit tests for a particular stored procedure should result in the same output, e.g. always one row in a table (like an audit entry), then you could add this test condition to the Test cleanup instead, while keeping the test-specific conditions in the tests where they belong.
It could be a little misleading that a core test condition lives in the Test cleanup section, so it is a bit of a workaround.
You would also have to ensure that the Test cleanup section doesn't remove any data you want to check in your general test conditions, as the cleanup code would be executed first. I work around this by doing my cleanup in the initialize section, just before setting up the data; in effect, the cleanup always puts the database into a known state regardless of which test ran before.
For the audit example, step 5 might confirm that [dbo].[Audit] has one record, while step 1 deletes from [dbo].[Audit], returning the row count to 0.
A similar problem is duplication of setup/teardown code. I like having a test class per stored procedure. Within a test class I often have many tests that look at the side effects and data changes caused by running the associated stored procedure.
I don't like that each class has >80% the same setup and teardown code. That means I have dozens of classes to change if my setup or teardown changes.
On the other hand, this post suggests that is just the way things should be: What does “DAMP not DRY” mean when talking about unit tests?
I am new to unit testing. I have created various tests, and when I run them one by one, all of them pass. However, when I run the whole set as a batch, some tests fail. Why is that, and how can I correct it?
To resolve this issue it is important to follow certain rules when writing unit tests. Some of the rules are easy to follow and apply, while others may need further consideration depending on your circumstances.
Each test should set up a unique set of data. This is particularly important when you work with persistent data, e.g. in a database. If a test creates a user with a particular user id, write the test so that it uses a different user id each time. For example (C#):
// a fresh GUID gives each run a unique user id
var user = new User(Guid.NewGuid());
At the end of each test, clean up the data that the test has created. For example, in the teardown method remove the data you created (C#, NUnit):
[TearDown]
public void TheTearDownMethod() {
    _repository.Delete(_user);
}
Variations are possible, e.g. when you test against a database you may choose to load a backup just before you run the test suite. If you craft your tests carefully you don't need to cleanup the database after each test (or subset of tests).
To get from where you are now (each test passes when run in isolation) to where you would like to be, start by running the first two tests in sequence and make them pass. Then run three in sequence, make them pass, and so on. In each iteration, identify which previous test causes the newly added test to fail, and resolve that dependency. This way you learn a lot about your tests, but also about how to avoid writing tests that depend on each other.
Once the suite passes in one batch, run it frequently so that you detect dependencies as early as possible.
This doesn't cover all scenarios and variations but hopefully gives you a guideline for building on what you already have.
Probably some of your tests are dependent on the prior state of the machine. Unit tests should not depend on the previous state of the process/machine, so you should look at the failing tests and work out what they are depending on.
Sometimes final conditions of one test have an impact on initial conditions of the next.
Manual run and batch run may have different behaviour regarding how initial conditions of each test are set.
You obviously have side effects from some tests that create unintended dependencies. Debug.
Good unit tests are atomic and have zero dependencies on other tests (important). A good practice is that each test creates (and removes) everything it depends on before running. Cleaning up afterwards is also good practice; it really helps and is recommended, but not 100% necessary.
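As a rough sketch of that practice in NUnit (the User and UserRepository types here are invented, in-memory stand-ins just so the example is self-contained; a real test would hit your actual data store):

using System;
using System.Collections.Generic;
using NUnit.Framework;

// Invented in-memory stand-ins so the sketch compiles on its own;
// a real test would work against the actual persistent store.
public class User
{
    public Guid Id { get; }
    public User(Guid id) { Id = id; }
}

public class UserRepository
{
    private readonly Dictionary<Guid, User> _store = new Dictionary<Guid, User>();
    public void Add(User user) { _store[user.Id] = user; }
    public User Get(Guid id) { User user; return _store.TryGetValue(id, out user) ? user : null; }
    public void Delete(User user) { _store.Remove(user.Id); }
}

[TestFixture]
public class UserRepositoryTests
{
    private UserRepository _repository;
    private User _user;

    [SetUp]
    public void SetUp()
    {
        // Create everything this test depends on; never rely on
        // data left behind by a previous test.
        _repository = new UserRepository();
        _user = new User(Guid.NewGuid()); // unique id on every run
        _repository.Add(_user);
    }

    [TearDown]
    public void TearDown()
    {
        // Recommended, though not 100% necessary if SetUp always
        // rebuilds a known state anyway.
        _repository.Delete(_user);
    }

    [Test]
    public void Get_ReturnsTheUserCreatedInSetUp()
    {
        Assert.IsNotNull(_repository.Get(_user.Id));
    }
}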
I'm a controls developer and a relative newbie to unit testing. Almost daily, I fight the attitude that you cannot test controls because of the UI interaction. I'm producing a demonstration control to show that it's possible to dramatically reduce manual testing if the control is designed to be testable. Currently I've got 50% logic coverage, but I think I could bump that up to 75% or higher if I could find a way to test some of the more complicated parts.
For example, I have a class with properties that describe the control's state and a method that generates a WPF PathGeometry object made of several segments. The implementation looks something like this:
internal PathGeometry CreateOuterGeometry()
{
    double arcRadius = OuterCoordinates.Radius;
    double sweepAngle = OuterCoordinates.SweepAngle;
    ArcSegment outerArc = new ArcSegment(...);
    LineSegment arcEndToCenter = new LineSegment(...);
    PathFigure fig = new PathFigure();
    // configure figure and add segments...
    PathGeometry outerGeometry = new PathGeometry();
    outerGeometry.Figures.Add(fig);
    return outerGeometry;
}
I've got a few other methods like this that account for a few hundred blocks of uncovered code, an extra 25% coverage. I originally planned to test these methods, but rejected the notion. I'm still a unit testing newbie, and the only way I could think of to test the code would be several methods like this:
void CreateOuterGeometry_AngleIsSmall_ArcSegmentIsCorrect()
{
    ClassUnderTest classUnderTest = new ClassUnderTest();
    // configure the class under test...
    ArcSegment expectedArc = // generate expected arc...
    PathGeometry geometry = classUnderTest.CreateOuterGeometry();
    ArcSegment arc = (ArcSegment)geometry.Figures[0].Segments[0];
    Assert.AreEqual(expectedArc, arc);
}
The test itself looks fine; I'd write one for each expected segment. But I had some problems:
Do I need tests to verify "Is the first segment an ArcSegment?" In theory the test tests this, but shouldn't each test only test one thing? This sounds like two things.
The control has at least six cases for calculation and four edge cases; this means for each method I need at least ten tests.
During development I changed how the various geometries were generated several times. This would cause me to have to rewrite all of the tests.
The first problem gave me pause because it seemed like it might inflate the number of tests. I thought I might have to test things like "Were there x segments?" and "Is segment n the right type?", but now that I've thought more I see that there's no branching logic in the method, so I only need to do those tests once. The second problem convinced me that a lot of effort would be associated with the tests; it seems unavoidable. The third problem compounds the first two: every time I changed the way the geometry was calculated, I'd have to edit an estimated 40 tests to make them respect the new logic. This would also include adding or removing tests if segments were added or removed.
Because of these three problems, I opted to write an application and manual test plan that puts the control in all of the interesting states and asks the user to verify it looks a particular way. Was this wrong? Am I overestimating the effort involved with writing the unit tests? Is there an alternative way to test this that might be easier? (I'm currently studying mocks and stubs; it seems like it'd require some refactoring of the design and end up being approximately as much effort.)
Use dependency injection and mocks.
Create interfaces for ArcSegmentFactory, LineSegmentFactory, etc., and pass a mock factory to your class. This way, you'll isolate the logic that is specific to this object (this should make testing easier), and won't be depending on the logic of your other objects.
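A sketch of how that isolation might look; IArcSegmentFactory, the recording fake, and the constructor that accepts a factory are all illustrative names, not anything from the original code:

using System.Windows;
using System.Windows.Media;

// Hypothetical factory interface: the class under test asks the
// factory for segments instead of constructing them itself.
public interface IArcSegmentFactory
{
    ArcSegment Create(Point end, Size radius, double sweepAngle);
}

// Hand-rolled fake that records what the class under test asked for.
public class RecordingArcSegmentFactory : IArcSegmentFactory
{
    public Point? RequestedEnd;
    public double RequestedSweepAngle;

    public ArcSegment Create(Point end, Size radius, double sweepAngle)
    {
        RequestedEnd = end;
        RequestedSweepAngle = sweepAngle;
        return new ArcSegment(); // contents are irrelevant to the test
    }
}

A test then injects the fake, e.g. new ClassUnderTest(factory), calls CreateOuterGeometry(), and asserts on factory.RequestedSweepAngle; that checks this class's calculation logic without re-testing how the segments themselves are built.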
About what to test: you should test what's important. You probably have a timeline in which you want to have things done, and you probably won't be able to test every single thing. Prioritize stuff you need to test, and test in order of priority (considering how much time it will take to test). Also, when you've already made some tests, it gets much easier to create new tests for other stuff, and I don't really see a problem in creating multiple tests for the same class...
About the changes, that's what tests are for: allowing you to change without fearing that your change will bring chaos to the world.
You might try writing a control generation tool that generates random control graphs, and test those. This might yield some data points that you might not have thought of.
In our project, we use JUnit to perform tests which are not, strictly speaking, unit tests. We find, for example, that it's helpful to hook up a blank database and compare an automatic schema generated by Hibernate (an Object-Relational Mapping tool) to the actual schema for our test database; this helps us catch a lot of issues with wrong database mappings. But in general... you should only be testing one method, on one class, in a given test method. That doesn't mean you can't do multiple assertions against it to examine various properties of the object.
My approach is to convert the graph into a string (one segment per line) and compare this string to an expected result.
If you change something in your code, tests will start to fail but all you need to do is to check that the failures are in the right places. Your IDE should offer a side-by-side diff for this.
When you're confident that the new output is correct, just copy it over the old expected result. This will make sure that a mistake won't go unnoticed (at least not for long), the tests will still be simple and they are quick to fix.
Next, if you have common path parts, then you can put them into individual strings and build the expected result of a test from those parts. This allows you to avoid repeating yourself (and if the common part changes, you just have to update a single place for all tests).
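A sketch of that conversion against the earlier CreateOuterGeometry example (the exact formatting is arbitrary; all that matters is that it is stable, with one segment per line):

using System.Text;
using System.Windows.Media;

static string Describe(PathGeometry geometry)
{
    // One segment per line, so a failing test's diff points at
    // exactly the segment that changed.
    var sb = new StringBuilder();
    foreach (PathFigure figure in geometry.Figures)
    {
        foreach (PathSegment segment in figure.Segments)
        {
            if (segment is ArcSegment arc)
                sb.AppendLine($"Arc  to {arc.Point}, radius {arc.Size}");
            else if (segment is LineSegment line)
                sb.AppendLine($"Line to {line.Point}");
            else
                sb.AppendLine(segment.GetType().Name);
        }
    }
    return sb.ToString();
}

// In a test: compare against the stored expectation; when the new
// output is verified correct, copy it over the old expected result.
// Assert.AreEqual(expectedDescription, Describe(classUnderTest.CreateOuterGeometry()));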
If I understand your example correctly, you were trying to find a way to test whether a whole bunch of draw operations produce a given result.
Instead of human eyes, you could have produced a set of expected images (a snapshot of verified "good" images), and created unit tests which perform the same draw operations and compare the results against those snapshots with an image comparison. This would allow you to automate the testing of the graphic operations, which is what I understand your problem to be.
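For a WPF control, such a comparison might be sketched with RenderTargetBitmap as below; the baseline file name and sizes are made up, and a byte-for-byte PNG comparison assumes fully deterministic rendering:

using System.IO;
using System.Linq;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;

static byte[] RenderToPng(UIElement control, int width, int height)
{
    // Lay the control out off-screen and render it to a bitmap.
    control.Measure(new Size(width, height));
    control.Arrange(new Rect(0, 0, width, height));

    var bitmap = new RenderTargetBitmap(width, height, 96, 96, PixelFormats.Pbgra32);
    bitmap.Render(control);

    var encoder = new PngBitmapEncoder();
    encoder.Frames.Add(BitmapFrame.Create(bitmap));
    using (var stream = new MemoryStream())
    {
        encoder.Save(stream);
        return stream.ToArray();
    }
}

// In a test: compare the rendered control against a verified "good" image.
// byte[] expected = File.ReadAllBytes("Baselines/small-angle.png");
// Assert.IsTrue(expected.SequenceEqual(RenderToPng(control, 200, 200)));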
The textbook way to do this would be to move all the business logic to libraries or controllers which are called by a 1 line method in the GUI. That way you can unit test the controller or library without dealing with the GUI.
I'm new to unit testing so I'd like to get the opinion of some who are a little more clued-in.
I need to write some screen-scraping code shortly. The target system is a web ui where there'll be copious HTML parsing and similar volatile goodness involved. I'll never be notified of any changes by the target system (e.g. they put a redesign on their site or otherwise change functionality). So I anticipate my code breaking regularly.
So I think my real question is, how much, if any, of my unit testing should worry about or deal with the interface (the website I'm scraping) changing?
I think unit tests or not, I'm going to need to test heavily at runtime since I need to ensure the data I'm consuming is pristine. Even if I ran unit tests prior to every run, the web UI could still change between tests and runtime.
So do I focus on in-code testing and exception handling? Does that mean to draw a line in the sand and exclude this kind of testing from unit tests altogether?
Thanks
Unit testing should always be designed to have repeatable known results.
Therefore, to unit test a screen-scraper, you should be writing the test against a known set of HTML (you may use a mock object to represent this)
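For example, a sketch in NUnit where PriceScraper and its Parse method are hypothetical stand-ins for your own scraping code:

using NUnit.Framework;

[TestFixture]
public class ScraperTests
{
    // A fixed, known fragment of HTML: the result is repeatable no
    // matter what the live site does between test runs.
    private const string KnownHtml =
        "<html><body><table id='prices'>" +
        "<tr><td>Widget</td><td>9.99</td></tr>" +
        "</table></body></html>";

    [Test]
    public void Parse_ExtractsNameAndPriceFromKnownMarkup()
    {
        var scraper = new PriceScraper();     // hypothetical class under test
        var items = scraper.Parse(KnownHtml); // parse the string, not the live site

        Assert.AreEqual(1, items.Count);
        Assert.AreEqual("Widget", items[0].Name);
        Assert.AreEqual(9.99m, items[0].Price);
    }
}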
The sort of thing you are talking about doesn't really sound like a scenario for unit-testing to me - if you want to ensure your code runs as robustly as possible, then it is more, as you say, about in-code testing and exception handling.
I would also include some alerting code, so the system makes you aware of any occasion when the HTML does not get parsed as expected.
You should try to separate your tests as much as possible. Test the data handling with low level tests that execute the actual code (i.e. not via a simulated browser).
In the simulated browser, just make sure that the right things happen when you click on buttons, when you submit forms, and when you follow links.
Never try to test whether the layout is correct.
I think the thing unit tests might be useful for here is that, if you have a build server, they will give you an early warning that the code no longer works. You can't write a unit test to prove that screen-scraping will still work if the site changes its HTML (because you can't tell what they will change).
You might be able to write a unit test to check that something useful is returned from your efforts.
Ok, maybe I'm missing something, but I really don't see the point of Selenium. What is the point of opening the browser using code, clicking buttons using code, and checking for text using code? I read the website and I see how in theory it would be good to automatically unit test your web applications, but in the end doesn't it just take much more time to write all this code rather than just clicking around and visually verifying things work?
I don't get it...
It allows you to write functional tests in your "unit" testing framework (the issue is the naming of the latter).
When you are testing your application through the browser, you are usually testing the system fully integrated. Considering you already have to test your changes before committing them (smoke tests), you don't want to test them manually over and over.
Something really nice is that you can automate your smoke tests, and QA can augment those. Pretty effective, as it reduces duplication of effort and gets the whole team closer.
P.S. As with any practice you are using for the first time, there is a learning curve, so it usually takes longer the first few times. I also suggest you look at the Page Object pattern; it helps keep the tests clean.
Update 1: Note that the tests will also run JavaScript on the pages, which helps with testing highly dynamic pages. Also note that you can run them with different browsers, so you can check for cross-browser issues (at least on the functional side, as you still need to check the visuals).
Also note that as the amount of pages covered by tests builds up, you can create tests with complete cycles of interactions quickly. Using the Page Object pattern they look like:
LastPage aPage = somePage
    .SomeAction()
    .AnotherActionWithParams("somevalue")
    // ... other actions
    .AnotherOneThatKeepsYouOnThePage();
// Add some asserts using methods that give you info
// on LastPage (or that check the info is there).
// You can of course break the statements to add additional
// asserts on the multi-step story.
It is important to understand that you go about this gradually. If it is an already-built system, you add tests for the features/changes you are working on, adding more and more coverage along the way. Going manual instead usually hides what you missed to test. If you made a change that affects every single page and you can only check a subset (as time doesn't allow more), with automated tests you know exactly which ones you tested, and QA can work from there (hopefully by adding even more tests).
This is a common thing that is said about unit testing in general. "I need to write twice as much code for testing?" The same principles apply here. The payoff is the ability to change your code and know that you aren't breaking anything.
Because you can repeat the SAME test over and over again.
If your application is even 50+ pages and you need to do frequent builds and test it against X number of major browsers it makes a lot of sense.
Imagine you have 50 pages, all with 10 links each, and some with multi-stage forms that require you to go through the forms, putting in about 100 different sets of information to verify that they work properly with all credit card numbers, all addresses in all countries, etc.
That's virtually impossible to test manually. It becomes so prone to human error that you can't guarantee the testing was done right, never mind what the testing proved about the thing being tested.
Moreover, if you follow a modern development model, with many developers all working on the same site in a disconnected, distributed fashion (some working on the site from their laptop while on a plane, for instance), then the human testers won't even be able to access it, much less have the patience to re-test every time a single developer tries something new.
On any decent size of website, tests HAVE to be automated.
The point is the same as for any kind of automated testing: writing the code may take more time than "just clicking around and visually verifying things work", maybe 10 or even 50 times more.
But any nontrivial application will have to be tested far more than 50 times eventually, and manual tests are an annoying chore that will likely be omitted or done shoddily under pressure. That results in bugs remaining undiscovered until just before (or after) important deadlines, which in turn results in stressful all-night coding sessions or even outright monetary loss due to contract penalties.
Selenium (along with similar tools, like Watir) lets you run tests against the user interface of your Web app in ways that computers are good at: thousands of times overnight, or within seconds after every source checkin. (Note that there are plenty of other UI testing pieces that humans are much better at, such as noticing that some odd thing not directly related to the test is amiss.)
There are other ways to involve the whole stack of your app by looking at the generated HTML rather than launching a browser to render it, such as Webrat and Mechanize. Most of these don't have a way to interact with JavaScript-heavy UIs; Selenium has you somewhat covered here.
Selenium will record and re-run all of the manual clicking and typing you do to test your web application. Over and over.
Over time, studying myself has shown that I tend to do fewer tests and start skipping some, or forgetting about them.
Selenium instead takes each test and runs it; if a test doesn't return what you expect, it can let you know.
There is an upfront cost in time to record all these tests. I would recommend it like unit tests: if you don't have it already, start using it with the most complex, touchy, or most frequently updated parts of your code.
And if you save those tests as JUnit classes you can rerun them at your leisure, as part of your automated build, or in a poor man's load test using JMeter.
In a past job we used to unit test our web app. If the web app changes its look, those tests don't need to be rewritten; record-and-replay type tests would all need to be redone.
Why do you need Selenium? Because testers are human beings. They go home every day, can't always work weekends, take sickies, take public holidays, go on vacation every now and then, and get bored doing repetitive tasks, and you can't always rely on them being around when you need them.
I'm not saying you should get rid of testers, but an automated UI testing tool complements system testers.
The point is the ability to automate what was before a manual and time consuming test. Yes, it takes time to write the tests, but once written, they can be run as often as the team wishes. Each time they are run, they are verifying that behavior of the web application is consistent. Selenium is not a perfect product, but it is very good at automating realistic user interaction with a browser.
If you do not like the Selenium approach, you can try HtmlUnit, I find it more useful and easy to integrate into existing unit tests.
For applications with rich web interfaces (like many GWT projects), Selenium/Windmill/WebDriver/etc. is the way to create acceptance tests. In the case of GWT/GXT, the final user interface code is in JavaScript, so creating acceptance tests as normal JUnit test cases is basically out of the question. With Selenium you can create test scenarios matching real user actions and expected results.
Based on my experience with Selenium it can reveal bugs in the application logic and user interface (in case your test cases are well written). Dealing with AJAX front ends requires some extra effort but it is still feasible.
I use it to test multi-page forms, as this takes the burden out of typing the same thing over and over again. And having the ability to check whether certain elements are present is great. Again, using the form as an example, your final Selenium test could check whether something like "Thanks Mr. Rogers for ordering..." appears at the end of the ordering process.
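As a sketch, such a check with Selenium's C# WebDriver bindings might look like this (the URL and element names are made up for illustration):

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class OrderFormTests
{
    [Test]
    public void CompletingTheOrderForm_ShowsThankYouMessage()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            // Hypothetical URL and field names, purely for illustration.
            driver.Navigate().GoToUrl("https://example.com/order");
            driver.FindElement(By.Name("fullName")).SendKeys("Mr. Rogers");
            driver.FindElement(By.Id("submit")).Click();

            // The same check a human would do by eye, repeatable on
            // every build and in every browser you configure.
            StringAssert.Contains("Thanks Mr. Rogers", driver.PageSource);
        }
    }
}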