So I have this huge SF2 project, which is luckily written pretty 'OK'. Services are there, background jobs are there, no god classes, it's testable. But I've never gotten any further than just unit-testing stuff, so the question is basically: where do I start taking this further?
The project consists of SF2 and all the yada yada: Doctrine2, Beanstalkd, Gaufrette, some other abstractions. It's fine.
The one problem it has is some glue code in controllers here and there, but I don't see that as a big problem, since functional tests are going to be the main focus.
The infrastructure is set up pretty well too; it's all covered by Docker, so CI is going to work out well also.
But the project has basically gotten too large to test manually any longer, so I want full functional coverage on short notice, and I'll let the unit testing grow over time. (I'm going to dive into the isolated objects as they need future adjustments and build tests for them in due course.)
So I've got the unit testing covered; that's going to need to grow over time. But I want to make some steps towards functional testing to get some quick gains in the testing department. As in: yesterday.
My plan as of now is to use Behat and Mink for this. The tests are going to be huge, so I might as well have them written as stories instead of code. Behat also seems to have an extension for Symfony's BrowserKit.
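To give a sense of what "stories instead of code" looks like, a Behat feature is plain Gherkin built on the standard MinkExtension steps; the route and strings below are made up:

    Feature: Registration
      In order to use the application
      As a visitor
      I need to be able to sign up

      Scenario: Successful sign-up
        Given I am on "/register"
        When I fill in "Email" with "user@example.com"
        And I press "Sign up"
        Then I should see "Welcome"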
There are plenty of services and external things happening, but they are all isolated behind services, so I guess I can mock them through the test environment's service config.
Please give some advice here if there is a better way.
I'm also going to need fixtures. I'm using Alice for generating some fixtures so far; it seems nice together with the Doctrine extension, and I don't think there are "better" options for this one.
How should I test external services? I'm mocking things like a Facebook service, but I also want to really test it against some test account. Is this advisable? I know that this goes beyond the scope: according to the purist, the service has to be mocked and tested in every way possible to "ensure it's working". But at the end of the day it still breaks because of some API key or other problem in the connection, which I really can't afford. So please advise here as well.
All your suggestions for other tools are of course welcome, especially if there is a good book that covers my story.
I'm glad you brought up Behat; I was going to suggest the same thing.
I would consider starting with your most business-critical pieces: unit test the extremely important business logic and use Behat on the rest.
For the most part, I would create stubs for your services that return expected output for expected input. That way you can create failures based on specific input. You can override your services in your test config, as sketched below.
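In Symfony2 that override can live in the test environment's config, redefining the service id to point at a stub class. A minimal sketch (the service id and class names here are hypothetical):

    # app/config/config_test.yml
    services:
        app.facebook:
            class: Acme\AppBundle\Tests\Stub\StubFacebookService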
Another approach would be to start with very thin functional testing, where you make GET requests to all of your endpoints and look for 200s. This is a very quick way to make sure that your pages at least load. From there, you can start writing tests for your POST endpoints and expand your suite with more detailed test cases.
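Since Behat/Mink is the plan anyway, even that thin smoke test can be expressed as a story; MinkExtension ships a step for asserting response status codes (the paths below are examples):

    Feature: Smoke test
      Scenario Outline: Every page responds
        When I go to "<path>"
        Then the response status code should be 200

        Examples:
          | path      |
          | /         |
          | /login    |
          | /products |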
I have a huge group of unit tests that I'm running against our software at work, using IntelliJ IDEA. I'm writing these tests with Spock and Groovy, but they are, in turn, using JUnit, so anything that applies to JUnit should apply here as well... (in theory)
To figure out what the coverage is, I right-click on the root of the code I want to run and simply select "Test in 'example' with Coverage".
IntelliJ then proceeds to try to run the 270-odd tests we have in this part of the project. The problem is that it isn't running a couple of them, and I can't figure out why for the life of me.
I've googled the issue and turned up nothing substantial. The list of tests in IntelliJ only tells me that these couple of tests didn't start; it doesn't give a reason at all. I tried checking the logs, but all they say about these particular tests is that it's trying to open them, with no comment on whether that succeeded or why they aren't working.
Could someone just point me in a useful direction? I need to expand my test base's coverage, and being able to see what code is actually covered is pretty critical...
Some Clarification:
The tests can be run on their own, and they do compute coverage for themselves if you do so. They only fail to run when I run them in a batch.
I'm pretty sure it's not a display issue, because I get several different reports telling me that the tests were not run. Beyond just the frozen yellow spinner, a red "attention" bubble also warns me that the tests did not run.
I have two testing questions. Both are probably easily answered. The first is about this unit test I wrote in Grails:
    void testCount() {
        mockDomain(UserAccount)
        new UserAccount(firstName: "Ken").save()
        new UserAccount(firstName: "Bob").save()
        new UserAccount(firstName: "Dave").save()
        assertEquals(3, UserAccount.count())
    }
For some reason, I get 0 back. Did I forget to do something?
EDIT: Oh, I understand. The validation constraints were violated, so the objects didn't get stored. Is there any way to get some feedback here? That's a really crappy thing to have happen...
The second question is for those who use IDEA. What should I be running: IDEA's JUnit tests, or the Grails targets? I have two options.
Also, why does IDEA say that my tests pass and give me a green light even though the test above actually fails? It will really drive me nuts if I have to check the HTML test reports every time I run my tests...
Help?
I always do object.save(failOnError: true) in tests to avoid silent failures like this; it causes an exception to be thrown if validation fails. Even without a real database in a unit test, most of the constraints will be checked, although I prefer to use integration tests if I want to test complex relationships between domain objects.
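If you want feedback on which constraints were violated, rather than an exception, you can also inspect the errors property after a failed save; a quick sketch:

    def user = new UserAccount(firstName: "Ken")
    if (!user.save()) {
        // each entry describes a violated constraint (field, rejected value, error code)
        user.errors.allErrors.each { println it }
    }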
I personally haven't found the IDEA JUnit runner particularly useful when working with Grails. It is likely fine to use it for unit tests; for integration tests you might consider setting up an Ant target in "debug" mode to run your tests.
Over time, running the tests starts to take so long that I tend to run them exclusively from the command line to avoid the additional overhead IntelliJ adds.
In regards to your unit test, I am pretty sure you would need to run an integration test to get a count that is not zero. I'm not sure which unit test class you're using exactly, but since GORM is not bootstrapped in unit tests, I'm not sure the domain object mocking supports incrementing the count. Your test would likely pass as an integration test, provided that your domain objects validate.
Add flush: true to your save method:

    new UserAccount(firstName: "Ken").save(flush: true)
    ...
Grails sets the flush mode of the Hibernate session to manual, so a change is not automatically persisted when the action returns; the session is still open while the view is rendered. This allows views to access lazy-loaded collections and relationships, and prevents changes from being persisted automatically.
I'm new to unit testing, so I'd like to get the opinion of some people who are a little more clued in.
I need to write some screen-scraping code shortly. The target system is a web UI, and there'll be copious HTML parsing and similar volatile goodness involved. I'll never be notified of any changes by the target system (e.g. if they redesign their site or otherwise change functionality), so I anticipate my code breaking regularly.
So I think my real question is: how much, if any, of my unit testing should worry about or deal with the interface (the website I'm scraping) changing?
Unit tests or not, I'm going to need to test heavily at runtime, since I need to ensure the data I'm consuming is pristine. Even if I ran unit tests before every run, the web UI could still change between the tests and runtime.
So do I focus on in-code testing and exception handling? Does that mean drawing a line in the sand and excluding this kind of testing from unit tests altogether?
Thanks
Unit testing should always be designed to have repeatable, known results.
Therefore, to unit test a screen scraper, you should write the tests against a known set of HTML (you may use a mock object to represent this).
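For example, the extraction logic can be exercised against a fixed HTML fragment checked into the test suite; a sketch using Jsoup purely as an illustration (any HTML parser works):

    import org.jsoup.Jsoup

    // a known HTML fragment captured from the target site
    def html = '<div id="listing"><span class="price">$19.99</span></div>'

    // the extraction logic under test
    def price = Jsoup.parse(html).select('span.price').text()
    assert price == '$19.99'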
The sort of thing you are talking about doesn't really sound like a scenario for unit testing to me. If you want to ensure your code runs as robustly as possible, then it is more, as you say, about in-code testing and exception handling.
I would also include some alerting code, so the system makes you aware of any occasion when the HTML does not get parsed as expected.
You should try to separate your tests as much as possible. Test the data handling with low-level tests that execute the actual parsing code (i.e. not via a simulated browser).
In the simulated browser, just make sure that the right things happen when you click on buttons, when you submit forms, and when you follow links.
Never try to test whether the layout is correct.
I think the thing unit tests might be useful for here is that, if you have a build server, they will give you an early warning that the code no longer works. You can't write a unit test to prove that screen scraping will still work if the site changes its HTML (because you can't tell what they will change).
You might be able to write a unit test to check that something useful is returned from your efforts.
OK, maybe I'm missing something, but I really don't see the point of Selenium. What is the point of opening the browser with code, clicking buttons with code, and checking for text with code? I've read the website, and I see how in theory it would be good to automatically unit test your web applications, but in the end doesn't it just take much more time to write all this code than to simply click around and visually verify that things work?
I don't get it...
It allows you to write functional tests in your "unit" testing framework (the issue is the naming of the latter).
When you are testing your application through the browser, you are usually testing the system fully integrated. Considering that you already have to test your changes before committing them (smoke tests), you don't want to test them manually over and over.
Something really nice is that you can automate your smoke tests, and QA can augment them. It's pretty effective, as it reduces duplication of effort and brings the whole team closer.
PS: as with any practice you are using for the first time, there is a learning curve, so it usually takes longer the first few times. I also suggest you look at the Page Object pattern; it helps keep the tests clean.
Update 1: Note that the tests will also run the JavaScript on the pages, which helps with testing highly dynamic pages. Also note that you can run them with different browsers, so you can check for cross-browser issues (at least on the functional side; you still need to check the visuals).
Also note that, as the number of pages covered by tests builds up, you can quickly create tests with complete cycles of interactions. Using the Page Object pattern they look like:
    LastPage aPage = somePage
        .SomeAction()
        .AnotherActionWithParams("somevalue")
        //... other actions
        .AnotherOneThatKeepsYouOnThePage();

    // add some asserts using methods that give you info
    // on LastPage (or that check the info is there).
    // You can of course break up the statements to add
    // additional asserts on the multi-step story.
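For reference, a page object itself is just a class that wraps one screen, with each action returning the page object you land on next; a minimal WebDriver-flavoured sketch (all ids and class names are hypothetical):

    import org.openqa.selenium.By
    import org.openqa.selenium.WebDriver

    // the page you land on after logging in
    class HomePage {
        private final WebDriver driver
        HomePage(WebDriver driver) { this.driver = driver }
        String welcomeText() { driver.findElement(By.id('welcome')).text }
    }

    class LoginPage {
        private final WebDriver driver
        LoginPage(WebDriver driver) { this.driver = driver }

        // logging in navigates away, so the action returns the next page object
        HomePage logInAs(String user, String password) {
            driver.findElement(By.id('username')).sendKeys(user)
            driver.findElement(By.id('password')).sendKeys(password)
            driver.findElement(By.id('login')).click()
            new HomePage(driver)
        }
    }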
It is important to understand that you can go about this gradually. If it is an already-built system, you add tests for the features and changes you are working on, adding more and more coverage along the way. Going manual instead usually hides what you missed testing: if you make a change that affects every single page and only check a subset (as time doesn't allow more), with automated tests you know which pages you actually tested, and QA can work from there (hopefully by adding even more tests).
This is a common thing said about unit testing in general: "I need to write twice as much code for testing?" The same principles apply here. The payoff is the ability to change your code and know that you aren't breaking anything.
Because you can repeat the SAME test over and over again.
If your application is even 50+ pages and you need to do frequent builds and test it against X number of major browsers, it makes a lot of sense.
Imagine you have 50 pages, all with 10 links each, and some with multi-stage forms that require you to go through them, entering about 100 different sets of information to verify that they work properly with all credit card numbers, all addresses in all countries, etc.
That's virtually impossible to test manually. It becomes so prone to human error that you can't guarantee the testing was done right, never mind what the testing proved about the thing being tested.
Moreover, if you follow a modern development model, with many developers all working on the same site in a disconnected, distributed fashion (some working on the site from their laptop while on a plane, for instance), then the human testers won't even be able to access it, much less have the patience to re-test every time a single developer tries something new.
On any decently sized website, tests HAVE to be automated.
The point is the same as for any kind of automated testing: writing the code may take more time than "just clicking around and visually verifying things work", maybe 10 or even 50 times more.
But any nontrivial application will have to be tested far more than 50 times eventually, and manual tests are an annoying chore that will likely be omitted or done shoddily under pressure. That results in bugs remaining undiscovered until just before (or after) important deadlines, which in turn results in stressful all-night coding sessions or even outright monetary loss due to contract penalties.
Selenium (along with similar tools, like Watir) lets you run tests against the user interface of your web app in ways that computers are good at: thousands of times overnight, or within seconds after every source check-in. (Note that there are plenty of other UI testing tasks that humans are much better at, such as noticing that some odd thing not directly related to the test is amiss.)
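As a concrete example, the kind of check you can rerun a thousand times overnight is only a few lines with the WebDriver API (the URL and expected title here are placeholders):

    import org.openqa.selenium.firefox.FirefoxDriver

    def driver = new FirefoxDriver()
    try {
        driver.get('http://localhost:8080/')    // placeholder URL
        assert driver.title.contains('My App')  // placeholder expectation
    } finally {
        driver.quit()                           // always release the browser
    }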
There are other tools that exercise the whole stack of your app by looking at the generated HTML rather than launching a browser to render it, such as Webrat and Mechanize. Most of these have no way to interact with JavaScript-heavy UIs; Selenium has you somewhat covered there.
Selenium will record and re-run all of the manual clicking and typing you do to test your web application. Over and over.
Over time, observing myself has shown that I tend to run fewer tests and start skipping some, or forgetting about them.
Selenium will instead take each test and run it, and if it doesn't return what you expect, it can let you know.
There is an upfront cost in time to record all these tests. I would approach it like unit tests: if you don't have any already, start with the most complex, touchy, or most frequently updated parts of your code.
And if you save those tests as JUnit classes, you can rerun them at your leisure, as part of your automated build, or in a poor man's load test using JMeter.
At a past job we used to unit test our web app this way. If the web app changes its look, the tests don't need to be rewritten; record-and-replay style tests would all need to be redone.
Why do you need Selenium? Because testers are human beings. They go home every day, can't always work weekends, take sick days, take public holidays, go on vacation every now and then, and get bored doing repetitive tasks, so you can't always rely on them being around when you need them.
I'm not saying you should get rid of testers, but an automated UI testing tool complements system testers.
The point is the ability to automate what was previously a manual and time-consuming test. Yes, it takes time to write the tests, but once written, they can be run as often as the team wishes. Each time they run, they verify that the behavior of the web application is consistent. Selenium is not a perfect product, but it is very good at automating realistic user interaction with a browser.
If you do not like the Selenium approach, you can try HtmlUnit; I find it more useful and easier to integrate into existing unit tests.
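For comparison, a rough sketch of an HtmlUnit check (the URL and expected title are placeholders):

    import com.gargoylesoftware.htmlunit.WebClient

    def client = new WebClient()
    try {
        def page = client.getPage('http://localhost:8080/') // placeholder URL
        assert page.titleText.contains('My App')            // placeholder expectation
    } finally {
        client.close()                                      // shut down the simulated browser
    }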
For applications with rich web interfaces (like many GWT projects), Selenium/Windmill/WebDriver/etc. is the way to create acceptance tests. In the case of GWT/GXT, the final user interface code is JavaScript, so creating acceptance tests as normal JUnit test cases is basically out of the question. With Selenium you can create test scenarios matching real user actions and expected results.
Based on my experience, Selenium can reveal bugs in the application logic and the user interface (provided your test cases are well written). Dealing with AJAX front ends requires some extra effort, but it is still feasible.
I use it to test multi-page forms, as this takes the burden out of typing the same thing over and over again. And having the ability to check whether certain elements are present is great. Again using the form as an example, your final Selenium test could check whether something like "Thanks, Mr. Rogers, for ordering..." appears at the end of the ordering process.