I would like to create functional tests to cover my Django project. It is a multi-page form that accepts input on each page, and the input on previous pages changes the content/options on the current page. The testing is currently set up with Splinter and PhantomJS. I see two main ways to set this up.
Option 1: For each page, create an instance of that page using stored data and point Splinter at it.

Benefits:
- Allows random-access testing of any page in the app
- Can reuse unit-test definitions to populate these synthetic pages

Downsides:
- Needs some kind of back end to be set up that Splinter can point to (at this point I have no idea how this would work, but it seems time-consuming)

Option 2: Structure the testing so that it is done in order, with the test content of page 1 passed to page 2.

Benefits:
- Seems like it should just work out of the box

Downsides:
- Does not allow the tests to be run in an arbitrary order, or one at a time
- Would probably take longer to run
- Errors on earlier pages will affect later pages
I've found many tutorials on how to do functional testing on a small scale (individual pages, features, etc.), but I'm trying to figure out whether there is an accepted way or best practice for structuring it across a large project. Is it one of these? Something else that I've missed?
What I was looking for was fixtures (https://docs.djangoproject.com/en/1.11/ref/django-admin/#django-admin-dumpdata). Things just get way too complex if you're trying to pass browser state between a whole project's worth of tests. It's easy to grab the DB state and easy to load it back in.
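A rough sketch of what that can look like, assuming an older Splinter release that still ships the phantomjs driver; the fixture file, URL, and expected text below are invented for illustration. The fixture (dumped earlier with manage.py dumpdata) restores the DB state that the earlier pages would have produced, so Splinter can jump straight to a later page against Django's live test server:

# Sketch only: fixture name, URL, and expected text are hypothetical.
from django.contrib.staticfiles.testing import StaticLiveServerTestCase
from splinter import Browser

class PageTwoTests(StaticLiveServerTestCase):
    # DB state captured earlier with `manage.py dumpdata`; it stands in for
    # everything the user would have entered on page 1.
    fixtures = ["survey_step2.json"]

    def setUp(self):
        self.browser = Browser("phantomjs")

    def tearDown(self):
        self.browser.quit()

    def test_page_two_shows_options_from_page_one(self):
        # Point Splinter at the live test server, directly at page 2.
        self.browser.visit(self.live_server_url + "/form/page/2/")
        # The options offered should reflect the answers loaded from the fixture.
        self.assertTrue(self.browser.is_text_present("Option B"))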
My Django app is in need of some automated testing.
Many of the views produce tabular data (from a generic list view). I have created fixtures to test some of the more complex cases that have been causing subtle bugs.
What should I be using to test the value in a specific table cell (or column)?
There seem to be a lot of testing tools and libraries out there: the Django test client, Selenium, Nose. A lot of them seem to be aimed at unit testing (while I am not finding many bugs at that level). I am looking more at integration testing. Reading all the documentation for all the libraries is going to take a while to find what I want.
So can someone advise which libraries or tools I should use to check the final output values in my list view's tabular output? I would like to give a URL and confirm that, in the returned page, the value in a particular row/column equals my expected value.
There seem to be a lot of testing tools and libraries out there: the Django test client, Selenium, Nose. A lot of them seem to be aimed at unit testing (while I am not finding many bugs at that level). I am looking more at integration testing.
It works very well for integration testing. Maybe you would have found that out if you had tried? Maybe it's time for you to (re)read about the correct hacker attitude.
Also, I've been having loads of fun with ghost.py.
Reading all the documentation for all the libraries is going to take a while to find what I want.
It's your job to do the research as well. Believe it or not, it took me about 5 hours yesterday to check all the solutions, decide to go with ghost.py, and get it to work nicely with Django (hence the gist upload!).
But yeah, if you don't want to learn anything new, then you're stuck at "not knowing how to do integration testing". If you want to learn how to do integration testing, then you have to do the research. There's no secret, my friend :)
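For the row/column check asked about above, a rough sketch using the Django test client plus an HTML parser could look like this (the fixture, URL, table layout, and expected value are all made up):

# Sketch only: fixture, URL, table layout, and expected value are hypothetical.
from django.test import TestCase
from bs4 import BeautifulSoup  # assumes beautifulsoup4 is available

class ReportTableTests(TestCase):
    fixtures = ["complex_report_case.json"]

    def test_total_column_of_first_data_row(self):
        response = self.client.get("/reports/monthly/")
        self.assertEqual(response.status_code, 200)
        soup = BeautifulSoup(response.content, "html.parser")
        # First data row of the rendered table, third column.
        first_row = soup.select("table tbody tr")[0]
        cells = first_row.find_all("td")
        self.assertEqual(cells[2].get_text(strip=True), "42.00")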
After learning about TDD and unit testing, I really want to write tests as I code, preferably before I code, because I saw the benefits while coding in Python. I'm developing a website, and I'm trying to write tests according to the requirements, but it's proving more difficult than expected.
I can see the benefits of writing tests when you're producing a library of code with a public interface for others to use. Developing a website, where there is really not much logic and it's mostly reading and writing against a database, seems a little harder to unit test. Mostly, I have to create/edit/delete rows in the database.
I'm using a framework (Kohana 3 for PHP), so 99% of the libraries and helpers I'm going to use have already been tested (hopefully), so what else is there to write tests for?
I'm mostly talking about scripting languages, not about CSS or HTML, I already know about cross-browser testing.
How much can you really test when developing a web site, and how should you go about it?
Edit: Is the lack of activity on this question a sign? I understand that certain things MUST be tested, like security and the like, but the question is how much can be written using unit tests and TDD.
Thanks.
Developing a website, where there is really not much logic, and mostly Reading and Writing against a database seems a little harder to unit test. Mostly, I have to create/edit/delete rows in the database.
Not completely true.
You have data model processing. Does the validation work? Do the calculations on the reported rows from the database work?
You have control, sequence, and navigation among pages -- do the links work? The test setup will provide a logged-in user. The test will (1) do a GET or a POST to fetch a page, then (2) confirm the page actually loaded and has the right stuff.
You have authorization -- who can do what? Each distinct test setup will provide a different logged-in user. The tests will (1) attempt a GET or POST to process a page. Some tests will (2) confirm they got directed to error-response pages. Other tests will (2) confirm that the GET or POST worked.
You have content on the page -- what data was fetched? The test setup will provide a logged-in user. The test will (1) do a GET or a POST to fetch a page, then (2) confirm the page actually loaded and has the right stuff.
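To make the shape of those tests concrete, here is a rough sketch in Django form (the question is about Kohana, but the pattern is the same; the user, URLs, and expected strings are invented):

# Sketch only: user credentials, URLs, and expected strings are invented.
from django.contrib.auth.models import User
from django.test import TestCase

class NavigationAndAuthTests(TestCase):
    def setUp(self):
        # The test setup provides a logged-in user.
        User.objects.create_user("alice", password="secret")
        self.client.login(username="alice", password="secret")

    def test_page_loads_with_the_right_stuff(self):
        # (1) GET the page, (2) confirm it loaded and has the right content.
        response = self.client.get("/orders/")
        self.assertEqual(response.status_code, 200)
        self.assertContains(response, "Your orders")

    def test_anonymous_user_is_directed_away(self):
        # A different (here anonymous) user gets sent to a login/error page.
        self.client.logout()
        response = self.client.get("/orders/")
        self.assertEqual(response.status_code, 302)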
Have you tried Selenium? It allows you to automatically do almost anything in a web browser. For example, you could have it go through and click all of the links and make sure that they go to the correct URL.
It works with multiple languages, including Python, and allows for testing in Chrome, Firefox, IE, and other browsers.
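For instance, a small Python sketch along those lines might be (the starting URL and the set of acceptable target URLs are supplied by you; nothing here is specific to any real site):

# Sketch only: start_url and expected_urls come from the caller.
from selenium import webdriver
from selenium.webdriver.common.by import By

def check_links(start_url, expected_urls):
    driver = webdriver.Firefox()
    try:
        driver.get(start_url)
        hrefs = [a.get_attribute("href")
                 for a in driver.find_elements(By.TAG_NAME, "a")]
        for href in hrefs:
            driver.get(href)
            # Confirm each link lands where we expect it to.
            assert driver.current_url in expected_urls, driver.current_url
    finally:
        driver.quit()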
If your site contains many forms, how do you write them? Do you write each view using plain HTML? Or do you write your own form helpers that generate forms just the way you want them? If you do that, you may find that unit-testing your form generators makes it easier to write them.
In general, if your program is mostly CRUD, look for ways to automate CRUD management; write your own custom CRUD generator. That does not mean writing the CRUD framework to end all frameworks; that would be too much work. Just write a generator for the small things you need in your current application. TDD will help you there.
Ok, maybe I'm missing something, but I really don't see the point of Selenium. What is the point of opening the browser using code, clicking buttons using code, and checking for text using code? I read the website and I see how in theory it would be good to automatically unit test your web applications, but in the end doesn't it just take much more time to write all this code rather than just clicking around and visually verifying things work?
I don't get it...
It allows you to write functional tests in your "unit" testing framework (the issue is the naming of the latter).
When you are testing your application through the browser, you are usually testing the system fully integrated. Considering you already have to test your changes before committing them (smoke tests), you don't want to run those tests manually over and over.
Something really nice is that you can automate your smoke tests, and QA can augment them. It's pretty effective, as it reduces duplication of effort and gets the whole team closer.
P.S. As with any practice you are using for the first time, there is a learning curve, so it usually takes longer the first few times. I also suggest you look at the Page Object pattern; it helps keep the tests clean.
Update 1: Notice that the tests will also run JavaScript on the pages, which helps when testing highly dynamic pages. Also note that you can run them with different browsers, so you can check for cross-browser issues (at least on the functional side, as you still need to check the visuals).
Also note that as the number of pages covered by tests builds up, you can quickly create tests for complete cycles of interaction. Using the Page Object pattern, they look like:
LastPage aPage = somePage
    .SomeAction()
    .AnotherActionWithParams("somevalue")
    // ... other actions
    .AnotherOneThatKeepsYouOnThePage();
// Add some asserts using methods that give you info
// on LastPage (or that check the info is there).
// You can of course break up the statements to add additional
// asserts along the multi-step story.
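In Python with Selenium the same idea might look roughly like this; the page classes, element IDs, and selectors are invented for illustration:

# Sketch only: page classes, element IDs, and selectors are hypothetical.
from selenium.webdriver.common.by import By

class CheckoutPage:
    def __init__(self, driver):
        self.driver = driver

    def fill_address(self, address):
        self.driver.find_element(By.ID, "address").send_keys(address)
        return self  # this action keeps you on the same page

    def submit(self):
        self.driver.find_element(By.ID, "submit").click()
        return ConfirmationPage(self.driver)  # this action moves to the next page

class ConfirmationPage:
    def __init__(self, driver):
        self.driver = driver

    def confirmation_text(self):
        return self.driver.find_element(By.CSS_SELECTOR, ".message").text

A test then chains the actions just like the snippet above, e.g. CheckoutPage(driver).fill_address("1 Main St").submit().confirmation_text().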
It is important to understand that you go about this gradually. If it is an already-built system, you add tests for the features/changes you are working on, adding more and more coverage along the way. Going manual instead usually hides what you missed testing, so if you made a change that affects every single page and you only check a subset (as time doesn't allow more), you know which ones you actually tested and QA can work from there (hopefully by adding even more tests).
This is a common thing that is said about unit testing in general. "I need to write twice as much code for testing?" The same principles apply here. The payoff is the ability to change your code and know that you aren't breaking anything.
Because you can repeat the SAME test over and over again.
If your application is even 50+ pages and you need to do frequent builds and test it against X major browsers, it makes a lot of sense.
Imagine you have 50 pages, all with 10 links each, and some with multi-stage forms that require you to go through the forms, putting in about 100 different sets of information to verify that they work properly with all credit card numbers, all addresses in all countries, etc.
That's virtually impossible to test manually. It becomes so prone to human error that you can't guarantee the testing was done right, never mind what the testing proved about the thing being tested.
Moreover, if you follow a modern development model, with many developers all working on the same site in a disconnected, distributed fashion (some working on the site from their laptop while on a plane, for instance), then the human testers won't even be able to access it, much less have the patience to re-test every time a single developer tries something new.
On any decent size of website, tests HAVE to be automated.
The point is the same as for any kind of automated testing: writing the code may take more time than "just clicking around and visually verifying things work", maybe 10 or even 50 times more.
But any nontrivial application will have to be tested far more than 50 times eventually, and manual tests are an annoying chore that will likely be omitted or done shoddily under pressure. That results in bugs remaining undiscovered until just before (or after) important deadlines, which in turn means stressful all-night coding sessions or even outright monetary loss due to contract penalties.
Selenium (along with similar tools, like Watir) lets you run tests against the user interface of your Web app in ways that computers are good at: thousands of times overnight, or within seconds after every source checkin. (Note that there are plenty of other UI testing pieces that humans are much better at, such as noticing that some odd thing not directly related to the test is amiss.)
There are other ways to involve the whole stack of your app by looking at the generated HTML rather than launching a browser to render it, such as Webrat and Mechanize. Most of these don't have a way to interact with JavaScript-heavy UIs; Selenium has you somewhat covered here.
Selenium will record and re-run all of the manual clicking and typing you do to test your web application. Over and over.
Over time, observing myself has shown that I tend to do fewer tests and start skipping some, or forgetting about them.
Selenium will instead take each test and run it; if it doesn't return what you expect, it can let you know.
There is an upfront cost in time to record all these tests. I would recommend treating it like unit tests -- if you don't have them already, start with the most complex, touchy, or most frequently updated parts of your code.
And if you save those tests as JUnit classes you can rerun them at your leisure, as part of your automated build, or in a poor man's load test using JMeter.
In a past job we used to unit test our web app. If the web app changes its look, the tests don't need to be rewritten; record-and-replay style tests would all need to be redone.
Why do you need Selenium? Because testers are human beings. They go home every day, can't always work weekends, take sickies, take public holidays, go on vacation every now and then, get bored doing repetitive tasks, and can't always be relied on to be around when you need them.
I'm not saying you should get rid of testers, but an automated UI testing tool complements system testers.
The point is the ability to automate what was previously a manual and time-consuming test. Yes, it takes time to write the tests, but once written, they can be run as often as the team wishes. Each time they are run, they verify that the behavior of the web application is consistent. Selenium is not a perfect product, but it is very good at automating realistic user interaction with a browser.
If you do not like the Selenium approach, you can try HtmlUnit; I find it more useful and easier to integrate into existing unit tests.
For applications with rich web interfaces (like many GWT projects), Selenium/Windmill/WebDriver/etc. is the way to create acceptance tests. In the case of GWT/GXT, the final user interface code is in JavaScript, so creating acceptance tests using normal JUnit test cases is basically out of the question. With Selenium you can create test scenarios matching real user actions and expected results.
Based on my experience with Selenium it can reveal bugs in the application logic and user interface (in case your test cases are well written). Dealing with AJAX front ends requires some extra effort but it is still feasible.
I use it to test multi-page forms, as this takes the burden out of typing the same thing over and over again. And having the ability to check whether certain elements are present is great. Again, using the form as an example, your final Selenium test could check whether something like "Thanks Mr. Rogers for ordering..." appears at the end of the ordering process.
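That last check is tiny with Selenium's Python bindings; assuming driver is a WebDriver already sitting on the confirmation page, it is little more than:

# Sketch only: the thank-you wording is just the example from above.
def assert_order_confirmed(driver, name="Mr. Rogers"):
    assert "Thanks {} for ordering".format(name) in driver.page_source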
We develop custom survey web sites, and I am looking for a way to automate the pattern testing of these sites. Surveys often contain many complex rules and branches which are triggered by how items are responded to. All surveys are rigorously tested before being released to clients, and this testing results in a lot of manual work. I would like to learn of some options I could use to automate these tests by responding to questions and verifying the results in the database. The survey sites are produced by an engine which creates and writes ASP pages and receives the responses to process into a database, so the only way I can see to test a site is to interact with the web pages themselves. I guess in a way I need to build some type of bot; I really don't know much about the design behind them.
Could someone please provide some suggestions on how to achieve this? Thank you for your time.
Brett
Check out Selenium: http://selenium.openqa.org/
Also, check out the answers to this other question: https://stackoverflow.com/questions/484/how-do-you-test-layout-design-across-multiple-browsersoss
You could also check out WatiN.
Sounds like your engine could generate a test script using something like Test::WWW::Mechanize.
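Test::WWW::Mechanize is Perl; a rough Python analogue of such a generated script, using the mechanize package (the URL, form control name, and answers below are made up), might be:

# Sketch only: start_url, the control name, and the answers are hypothetical.
import mechanize

def answer_survey(start_url, answers):
    br = mechanize.Browser()
    br.open(start_url)
    for answer in answers:
        br.select_form(nr=0)       # the survey form on the current page
        br["question"] = [answer]  # radio/select controls take a list of values
        br.submit()                # the engine decides which page comes next
    # Afterwards, verify separately that the responses landed in the database.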
The usual test methodologies apply: white box and black box.
White-box testing for you may mean instrumenting your application so you can drive it into a particular state; then you can predict the result you expect.
Black-box testing may mean that you hit a page, then consider which of the possible outcomes are valid. Rinse and repeat until you get sufficient coverage.
Another thing we use is monitoring statistics for our service: did we get the expected number of hits on this page? We routinely run A/B tests, and I have run A/B tests against refactored code to verify that nothing changed before rolling things out.
/Allan
I can think of a couple of good web application testing suites that should get the job done - one free/open source and one commercial:
Selenium (open source/cross platform)
TestComplete (commercial/Windows-based)
Both will let you create test suites that verify database records based on interactions with the web app.
The fact that you're Windows/ASP based might mean that TestComplete will get you up and running faster, as it's native to Windows and .NET. You can download a free trial to see if it'll work for you before making the investment.
Check out the unit testing framework 'lime' that comes with the Symfony framework: http://www.symfony-project.org/book/1_0/15-Unit-and-Functional-Testing. You didn't mention your language; lime is PHP.
I would suggest the mechanize gem, available for Ruby. It's pretty intuitive to use.
I use QEngine (commercial) for the same purpose. I need to add data and check the same in the UI, so I write one script which does this and call it in a loop. The data can be passed in via either CSV or Excel.
Check out www.qengine.com; you can also try Watir.
My proposal is QA Agent (http://qaagent.com). It seems this is a new approach, because you do not need to install anything: just develop your web tests in the browser-based IDE. By the way, you can develop your tests using jQuery and JavaScript. Really cool!