Unit tests are strongly connected to the application's existing code and are written by developers. But what about UI and API automated tests (integration tests)? Does anybody think it's acceptable to re-use application code in a separate automation solution?
The answer would be no. UI tests follow the UI: go to that page, enter that value in that textbox, press that button, expect to see this text. You don't need anything code-related for this. All of it should be done against acceptance criteria, so you should already know what to expect without looking at any code.
For the API integration tests you would call the endpoints with some payloads and then check the results. You don't need references to any code for this. The API should be documented well enough to explain which endpoints are available, what the payloads look like, and what you can expect to get back.
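As a rough illustration (the endpoint, payload, and expected fields here are hypothetical, and requests/pytest is just one possible toolset), such a test only needs the documented contract, not the application code:

    import requests

    BASE_URL = "https://api.example.test"  # hypothetical service under test

    def test_create_order_returns_expected_fields():
        # Build the payload from the documented contract, not from application code.
        payload = {"customerId": 42, "items": [{"sku": "ABC-1", "qty": 2}]}

        response = requests.post(f"{BASE_URL}/orders", json=payload, timeout=10)

        # Assertions come straight from the acceptance criteria / API docs.
        assert response.status_code == 201
        body = response.json()
        assert body["customerId"] == 42
        assert len(body["items"]) == 2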
I am not sure why you'd think about reusing application code inside an automation project.
Ok, so after clarification: you're talking about reusing models only, not actual code. That's not a bad idea and it can actually help, as long as those NuGet packages do not bring in any other dependencies.
Code reusability is a great concept, but it's very hard to get right in practice. Models typically come with annotations which require other packages that are, of course, not needed in an automation project. So if you can get NuGet packages without extra dependencies, literally data models only and nothing else, then that does work. Anything more than that will create issues, so I'd push back on it.
So I have this huge SF2 project, which is luckily written pretty 'OK'. Services are there, background jobs are there, no god classes, it's testable -- but I've never gotten any further than just unit-testing stuff, so the question is basically: where do I start taking this further?
The project consists of SF2 and all the yada yada: Doctrine2, Beanstalkd, Gaufrette, some other abstractions -- it's fine.
The one problem it has is some glue code in controllers here and there, but I don't see it as a big problem since functional tests are going to be the main focus.
The infrastructure is set up pretty OK as well; it's covered by Docker, so CI is going to work out well too.
But it has basically gotten too large to test manually any longer, so I want full functional coverage on short notice and will let the unit testing grow over time. (I'm going to dive into the isolated objects as they need future adjustments and build tests for them in due course.)
So I've got the unit testing covered (that's going to need to grow over time), but I want to take some steps towards functional testing to get some quick gains on the testing front. YESTERDAY.
My plan as of now is to use Behat and Mink for this. The tests are going to be huge, so I might as well have them written as stories instead of code. Behat also seems to have an extension for Symfony's BrowserKit.
There are plenty of services and external things happening, but they are all isolated by services, so I can mock them through the test environment's service config, I guess.
Please give some advice here if there is a better way.
I'm also going to need fixtures. I'm using Alice for generating some fixtures so far; it seems nice together with the Doctrine extension, and I don't think there are "better" options on this one.
How should I test external services? I'm mocking things like a Facebook service, but I also want to really test it against some test account -- is this advisable? I know this goes beyond its scope; the service has to be mocked and tested in every way possible to "ensure it's working", according to the purists. But at the end of the day it still breaks because of some API key or other problem in the connection, which I can't really afford. So please advise here as well.
All your suggestions to use other tools are welcome of course, and especially if there is a good book that covers my story.
I'm glad you brought up Behat, I was going to suggest the same thing.
I would consider starting with your most business-critical pieces; unit test the extremely important business logic and use Behat on the rest.
For the most part, I would create stubs for your services that return expected output for expected input. That way you can trigger failures based on specific input. You can override your services in your test config.
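The thread is about Symfony, but the stubbing idea is language-agnostic; here is a minimal Python-flavoured sketch of a stub that returns canned output for known input (all names are made up):

    class PaymentGateway:
        """Interface of the real service (hypothetical)."""
        def charge(self, account_id: str, amount: float) -> dict:
            raise NotImplementedError

    class StubPaymentGateway(PaymentGateway):
        """Stub wired in via the test configuration instead of the real gateway."""
        def charge(self, account_id: str, amount: float) -> dict:
            # Expected output for expected input; unknown input exercises the failure path.
            if account_id == "acct-ok":
                return {"status": "approved", "amount": amount}
            return {"status": "declined", "reason": "unknown account"}

    def test_checkout_declines_unknown_account():
        gateway = StubPaymentGateway()
        result = gateway.charge("acct-missing", 9.99)
        assert result["status"] == "declined"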
Another approach would be very thin functional testing, where you make GET requests to all of your endpoints and look for 200s. This is a very quick way to make sure that your pages are at least loading. From there, you can start writing tests for your POST endpoints and expanding your suite with more detailed test cases.
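A minimal sketch of that thin pass, assuming a plain HTTP client and a hypothetical list of routes:

    import requests
    import pytest

    BASE_URL = "http://localhost:8000"  # assumed local test instance

    # Hypothetical list of GET endpoints; in practice this could be generated
    # from the application's routing table.
    PAGES = ["/", "/login", "/products", "/about"]

    @pytest.mark.parametrize("path", PAGES)
    def test_page_loads(path):
        response = requests.get(f"{BASE_URL}{path}", timeout=10)
        # Very coarse check: the page at least renders without a server error.
        assert response.status_code == 200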
After learning about TDD and unit testing, I'm really looking to write tests when I code, preferably before I code, because I can see the benefits, coding in Python. I'm developing a website, and I'm trying to write tests according to the requirements, but it's proving more difficult than expected.
I can see the benefits of writing tests when you're producing a library of code with a public interface for others to use. Developing a website, where there is really not much logic and it's mostly reading and writing against a database, seems a little harder to unit test. Mostly, I have to create/edit/delete rows in the database.
I'm using a framework (Kohana 3 for PHP), so 99% of all the libraries and helpers that I'm going to be using have already been tested (hopefully), so what else is there to write tests for?
I'm mostly talking about scripting languages, not about CSS or HTML, I already know about cross-browser testing.
How much can you really test when developing a web site, and how should you go about it?
Edit: Is the lack of activity on this question a sign? I understand that certain things MUST be tested, like security and the like, but how much can be written using unit tests and TDD is the question.
Thanks.
Developing a website, where there is really not much logic, and mostly Reading and Writing against a database seems a little harder to unit test. Mostly, I have to create/edit/delete rows in the database.
Not completely true.
You have data model processing. Does the validation work? Do the calculations on the reported rows from the database work?
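For example (a made-up model, not your code), the validation and calculation logic can be exercised without touching the real database:

    import pytest

    class Invoice:
        """Hypothetical model with validation and a derived calculation."""
        def __init__(self, lines):
            if not lines:
                raise ValueError("an invoice needs at least one line")
            self.lines = lines

        def total(self):
            return sum(qty * price for qty, price in self.lines)

    def test_empty_invoice_is_rejected():
        with pytest.raises(ValueError):
            Invoice([])

    def test_total_sums_all_lines():
        assert Invoice([(2, 5.0), (1, 10.0)]).total() == 20.0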
You have control, sequence and navigation among pages -- do the links work? The test setup will provide a logged-in-user. The test will (1) do a GET or a POST to fetch a page, then (2) confirm the page actually loaded and has the right stuff.
You have authorization -- who can do what? Each distinct test setup will provide a different logged-in user. The tests will (1) attempt a GET or POST to process a page. Some tests will (2) confirm they got directed to error-response pages. Others will (2) confirm that the GET or POST worked.
You have content on the page -- what data was fetched? The test setup will provide a logged-in-user. The test will (1) do a GET or a POST to fetch a page, then (2) confirm the page actually loaded and has the right stuff.
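A rough sketch of those page-level checks using plain HTTP calls (every URL, credential, and page string below is invented; the idea is just set up a user, fetch, assert):

    import requests

    BASE_URL = "http://localhost:8000"  # assumed test instance; routes are hypothetical

    def login(username, password):
        """Test setup: provide a logged-in user via a session cookie."""
        session = requests.Session()
        session.post(f"{BASE_URL}/login", data={"user": username, "pass": password})
        return session

    def test_report_page_shows_fetched_data():
        session = login("reporter", "secret")
        page = session.get(f"{BASE_URL}/reports/monthly")
        assert page.status_code == 200
        assert "Monthly totals" in page.text  # "the right stuff" per acceptance criteria

    def test_admin_page_rejects_ordinary_user():
        session = login("ordinary", "secret")
        page = session.get(f"{BASE_URL}/admin/users", allow_redirects=False)
        # Authorization check: the user should be bounced to an error or login page.
        assert page.status_code in (302, 403)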
Have you tried Selenium? It allows you to automatically do almost anything in a web browser. For example, you could have it go through and click all of the links and make sure that they go to the correct url.
It works with multiple languages, including Python, and allows for testing in Chrome, Firefox, IE, and other browsers.
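For example, a short Selenium sketch in Python that walks the links on a page and does a crude check that each target loads (the start URL is a placeholder, and the Selenium 4 API is assumed):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    START_URL = "http://localhost:8000/"  # placeholder for the site under test

    driver = webdriver.Firefox()  # Chrome, Edge, etc. work the same way
    try:
        driver.get(START_URL)
        hrefs = [a.get_attribute("href")
                 for a in driver.find_elements(By.TAG_NAME, "a")
                 if a.get_attribute("href")]
        for href in hrefs:
            driver.get(href)
            # Crude sanity check that we did not land on an error page.
            assert "404" not in (driver.title or ""), href
    finally:
        driver.quit()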
If your site contains many forms, how do you write them? Do you write each view using plain HTML? Or do you write your own form helpers that generate forms just the way you want them? If you do that, you may find that unit-testing your form generators makes it easier to write them.
In general, if your program is mostly CRUD, look out for ways to automate CRUD management; write your own custom CRUD generator. Which does not mean write the CRUD framework that will end all frameworks; that would be too much work. Just write a generator for the small things you need for your current application. TDD will help you there.
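As a toy example of testing the generator rather than every form (the helper below is invented purely for illustration):

    def text_field(name, value="", required=False):
        """Tiny hypothetical form helper that renders one <input> element."""
        required_attr = " required" if required else ""
        return f'<input type="text" name="{name}" value="{value}"{required_attr}>'

    def test_text_field_marks_required_inputs():
        html = text_field("email", required=True)
        assert 'name="email"' in html
        assert html.endswith("required>")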
I'm writing an upload function for ColdFusion on Wheels and need to unit test it once it's finished. The problem I'm having, though, is that I have no idea how to create a mock multi-part form post in ColdFusion that I can use in my unit tests.
What I would like to be able to do is create the mock request simulating a file being uploaded that cffile could then process and I could check against.
I saw in the online ColdFusion help an example of creating such a request using cfhttp; however, it has to post to another page, which kind of defeats the whole purpose.
Great question rip. For what it's worth, I contribute to MXUnit (wrote the eclipse plugin) and this scenario came up in a presentation I did at cfobjective this year on writing easier-to-test code (http://mxunit.org/doc/zip/marc_esher_cfobjective_2009_designing_for_easy_testability.zip).
In this scenario, I'd suggest NOT testing the upload. I believe we shouldn't spend time testing stuff that's not our code. The likelihood that we'll catch a bug or other oddity is sufficiently low for me to justify leaving it untested. I believe we should be testing "our" code, however.
In your scenario, you have two behaviors: 1) the upload and 2) the post-upload behavior. I'd test the post-upload behavior.
This now frees your unit test to not care about the source of the file. Notice how this actually results in a decoupling of the upload logic from the "what do I do with the file?" logic. Taken to its conclusion, this at least creates the potential (theoretically) to reuse this post-upload logic for stuff other than just uploads.
This keeps your test much easier, because now you can just test against some file that you put somewhere in the setUp of the unit test itself.
So your component changes from
<cffunction name="uploadAndDoStuff">
to
<cffunction name="upload">
and then
<cffunction name="handleUpload">
or "handleFile" or "doSomethingWithFile" or "processNetworkFile" or some other thing. and in your unit test, you leave upload() untested and just test your post-upload handler. For example, if I were doing this at work, and my requirements were: "Upload file; move file to queue for virus scanning; create thumbnails if image or jpg; add stuff to database; etc" then I'd keep each of those steps as separate functions, and I'd test them in isolation, because I know that "upload file" already works since it's worked since CF1.0 (or whatever). Make sense?
Better yet, leave the "upload" out of the component entirely. Nothing wrong with keeping it in a CFM file, since there's kind of not much point (as far as I can see) in attempting to genericize it. There might be benefits where mocking is concerned, but that's a different topic altogether.
I did a quick search for MXUnit testing a form upload and came up with this Google Groups thread: Testing a file upload.
It's a discussion between Peter Bell and Bob Silverberg, the outcome of which is that testing a file upload is actually part of acceptance testing as opposed to a unit test.
I know that doesn't strictly answer your question, but I hope it helps.
Strictly speaking, you are right. If you have to go out and call an external resource to test something (database, web service, file uploader), it does defeat the purpose of unit testing. The best general advice is to mock out the behaviours of external resources and assume they function or are covered by their own unit tests.
Pragmatism can alter this, though. On the Model-Glue framework codebase, we have a number of unit tests that call out to external resources, for things like persisting values across a redirect, connecting the AbstractRemotingService functionality and so on. These were deemed important enough features to unit test, and we chose to make external dependencies part of our unit tests to ensure good coverage. After all, for framework code there ARE NO acceptance tests. Those are done by our users, and users of a framework expect flawless code, as they should.
So in cases where you want to test a vital external resource and you want it handled by an automated function, it can make sense to add it to your unit tests. Just know you are deviating from a best practice that is there for a reason.
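For the "mock it out" side of that trade-off, a minimal Python sketch with unittest.mock (the Facebook-ish service and its methods are invented for illustration):

    from unittest import mock

    class FacebookClient:
        """Hypothetical wrapper around the real external API."""
        def get_profile(self, user_id):
            raise NotImplementedError("talks to the network in production")

    def display_name(client, user_id):
        profile = client.get_profile(user_id)
        return f'{profile["first_name"]} {profile["last_name"]}'

    def test_display_name_formats_profile():
        client = mock.Mock(spec=FacebookClient)
        client.get_profile.return_value = {"first_name": "Ada", "last_name": "Lovelace"}
        assert display_name(client, 123) == "Ada Lovelace"
        client.get_profile.assert_called_once_with(123)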
Dan Wilson
How do you unit test a large MFC UI application?
We have a few large MFC applications that have been in development for many years. We use some standard automated QA tools to run basic scripts that check fundamentals (file open, etc.); these are run by the QA group after the daily build.
But we would like to introduce procedures such that individual developers can build and run tests against dialogs, menus, and other visual elements of the application before submitting code to the daily build.
I have heard of techniques such as hidden test buttons on dialogs that only appear in debug builds; are there any standard toolkits for this?
Environment is C++/C/FORTRAN, MSVC 2005, Intel FORTRAN 9.1, Windows XP/Vista x86 & x64.
It depends on how the app is structured. If logic and GUI code are separated (MVC), then testing the logic is easy. Take a look at Michael Feathers' "The Humble Dialog Box" (PDF).
EDIT: If you think about it, you should refactor very carefully if the app is not structured that way. There is no other technique for testing the logic. Scripts which simulate clicks are just scratching the surface.
It is actually pretty easy:
Assume your control/window/whatever changes the contents of a listbox when the user clicks a button and you want to make sure the listbox contains the right stuff after the click.
Refactor so that there is a separate list holding the items for the listbox to show. The items are stored in that list rather than extracted from wherever your data comes from. The code that makes the listbox list things knows only about the new list.
Then you create a new controller object which will contain the logic code. The method that handles the button click only calls mycontroller->ButtonWasClicked(). It does not know about the listbox or anything else.
MyController::ButtonWasClicked() does what needs to be done for the intended logic, prepares the item list and tells the control to update. For that to work you need to decouple the controller and the control by creating an interface (pure virtual class) for the control. The controller knows only an object of that type, not the control itself.
That's it. The controller contains the logic code and knows the control only via the interface. Now you can write regular unit tests for MyController::ButtonWasClicked() by mocking the control. If you have no idea what I am talking about, read Michael's article. Twice. And again after that.
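If it helps to see the shape of it outside C++, here is a tiny language-agnostic sketch (written in Python, every name invented) of the controller-plus-view-interface split described above:

    class ItemView:
        """Interface the controller talks to (the real one wraps the listbox)."""
        def show_items(self, items):
            raise NotImplementedError

    class FakeItemView(ItemView):
        def __init__(self):
            self.items = None
        def show_items(self, items):
            self.items = list(items)

    class MyController:
        def __init__(self, view, repository):
            self.view = view
            self.repository = repository

        def button_was_clicked(self):
            # All the logic lives here, away from the GUI toolkit.
            items = sorted(self.repository())
            self.view.show_items(items)

    def test_click_fills_listbox_with_sorted_items():
        view = FakeItemView()
        controller = MyController(view, repository=lambda: ["b", "a"])
        controller.button_was_clicked()
        assert view.items == ["a", "b"]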
(Note to self: must learn not to blather that much)
Since you mentioned MFC, I assumed you have an application that would be hard to get under an automated test harness. You'll see the best benefits of unit testing frameworks when you build tests as you write the code, but trying to add a new feature in a test-driven manner to an application which is not designed to be testable can be hard work and, well, frustrating.
Now what I am going to propose is definitely hard work, but with some discipline and perseverance you'll see the benefit soon enough.
First you'll need some management backing for new fixes to take a bit longer. Make sure everyone understands why.
Next, buy a copy of the WELC book (Working Effectively with Legacy Code). Read it cover to cover if you have the time, or, if you're hard pressed, scan the index to find the symptom your app is exhibiting. This book contains a lot of good advice and is just what you need when trying to get existing code testable.
Then, for every new fix/change, spend some time understanding the area you're going to work on. Write some tests in an xUnit variant of your choice (freely available) to exercise current behavior.
Make sure all tests pass. Then write a new test which exercises the needed behavior or exposes the bug.
Write code to make this last test pass.
Refactor mercilessly within the area under tests to improve design.
Repeat for every new change that you have to make to the system from here on. No exceptions to this rule.
Now the promised land: soon, ever-growing islands of well-tested code will begin to surface. More and more code will fall under the automated test suite, and changes will become progressively easier to make. And that is because, slowly but surely, the underlying design becomes more testable.
The easy way out was my previous answer. This is the difficult but right way out.
I realize this is a dated question, but for those of us who still work with MFC, the Microsoft C++ Unit Testing Framework in VS2012 works well.
The General Procedure:
Compile your MFC Project as a static library
Add a new Native Unit Test Project to your solution.
In the Test Project, add your MFC Project as a Reference.
In the Test Project's Configuration Properties, add the Include directories for your header files.
In the Linker's Input options, add your MFC .lib along with nafxcwd.lib and libcmtd.lib.
Under 'Ignore Specific Default Libraries' add nafxcwd.lib;libcmtd.lib;
Under General, add the location of your MFC project's exported .lib file.
This Stack Overflow question, https://stackoverflow.com/questions/1146338/error-lnk2005-new-and-delete-already-defined-in-libcmtd-libnew-obj, has a good description of why you need nafxcwd.lib and libcmtd.lib.
The other important thing to check for in legacy projects: under General Configuration Properties, make sure both projects are using the same 'Character Set'. If your MFC project uses a Multi-Byte Character Set, you'll need the MS Test project to do so as well.
Though not perfect, the best I have found for this is AutoIt: http://www.autoitscript.com/autoit3
"AutoIt v3 is a freeware BASIC-like scripting language designed for automating the Windows GUI and general scripting. It uses a combination of simulated keystrokes, mouse movement and window/control manipulation in order to automate tasks in a way not possible or reliable with other languages (e.g. VBScript and SendKeys). AutoIt is also very small, self-contained and will run on all versions of Windows out-of-the-box with no annoying "runtimes" required!"
This works well when you have access to the source code of the application being driven, because you can use the resource ID numbers of the controls you want to drive. That way you do not have to worry about simulated mouse clicks on particular pixels. Unfortunately, in a legacy application, you may well find that the resource IDs are not unique, which may cause problems. However, it is very straightforward to change the IDs to be unique and rebuild.
The other issue is that you will encounter timing problems. I do not have a tried and true solution for these. Trial and error is what I have used, but this is clearly not scalable. The problem is that the AutoIt script must wait for the test application to respond to a command before the script issues the next command or checks for the correct response. Sometimes it is not easy to find a convenient event to wait and watch for.
My feeling is that, in developing a new application, I would insist on a consistent way to signal "READY". This would be helpful to the human users as well as test scripts! This may be a challenge for a legacy application, but perhaps you can introduce it in problematical points and slowly spread it to the entire application as maintenance continues.
Although it cannot handle the UI side, I unit test MFC code using the Boost Test library. There is a Code Project article on getting started:
Designing Robust Objects with Boost
Well, we have one of these humongous MFC apps at the workplace. It's a gigantic pain to maintain or extend... it's a huge ball of mud now, but it rakes in the moolah. Anyways:
We use Rational Robot for doing smoke tests and the like.
Another approach that has had some success is to create a small product-specific language and script tests that use VBScript and some control-handle spying magic. Turn common actions into commands, e.g. OpenDatabase would be a command that in turn injects the required script blocks to click Main Menu > File > "Open...". You then create Excel sheets which are a series of such commands. These commands can take parameters too (something like a FIT test, but more work). Once you have most of the common commands identified and their scripts ready, writing new tests is a matter of picking and assembling scripts (tagged by CommandIDs). A test runner parses these Excel sheets, combines all the little script blocks into a test script and runs it.
OpenDatabase "C:\tests\MyDB"
OpenDialog "Add Model"
AddModel "M0001", "MyModel", 2.5, 100
PressOK
SaveDatabase
HTH
Actually we had been using Rational Team Test, then Robot, but in recent discussions with Rational we discovered they have no plans to support native x64 applications, focusing more on .NET, so we decided to switch automated QA tools. This is great, but licensing costs don't allow us to enable it for all developers.
All our applications support a COM API for scripting, which we regression test via VB, but this tests the API, not the application as such.
Ideally, I would be interested in how people integrate CppUnit and similar unit testing frameworks into the application at the developer level.
Unit testing and ASP.NET web applications are an ambiguous point in my group. More often than not, good testing practices fall through the cracks and web applications end up going live for several years with no tests.
The cause of this pain point generally revolves around the hassle of writing UI automation mid-development.
How do you or your organization integrate TDD best practices with web application development?
Unit testing will be achievable if you separate your layers appropriately. As Rob Cooper implied, don't put any logic in your WebForm other than logic to manage your presentation. All the other logic and the persistence layer should be kept in separate classes, and then you can test those individually.
To test the GUI some people like Selenium. Others complain that it is a pain to set up.
I layer out the application and at least unit test from the presenter/controller (whichever is your preference, mvc/mvp) to the data layer. That way I have good test coverage over most of the code that is written.
I have looked at FitNesse, WatiN and Selenium as options to automate the UI testing, but I haven't got around to using these on any projects yet, so we stick with human testing. FitNesse was the one I was leaning toward, but I couldn't introduce it as well as introducing TDD (does that make me bad? I hope not!).
This is a good question, one that I will be subscribing too :)
I am still relatively new to web dev, and I too am looking at a lot of code that is largely untested.
For me, I keep the UI as light as possible (normally only a few lines of code) and test the crap out of everything else. At least I can then have some confidence that everything that makes it to the UI is as correct as it can be.
Is it perfect? Perhaps not, but at least it is still quite highly automated and the core code (where most of the "magic" happens) still has pretty good coverage.
I would generally avoid testing that involves relying on UI elements. I favor integration testing, which tests everything from your database layer up to the view layer (but not the actual layout).
Try to start a test suite before writing a line of actual code in a new project, since it's harder to write tests later.
Choose carefully what you test - don't mindlessly write tests for everything. Sometimes it's a boring task, so don't make it harder. If you write too many tests, you risk abandoning that task under the weight of time-consuming maintenance.
Try to bundle as much functionality as possible into a single test. That way, if something goes wrong, the errors will propagate anyway. For example, if you have a digest-generating class, test the actual output, not every single helper function (a small sketch follows below).
Don't trust yourself. Assume that you will always make mistakes, and so you write tests to make your life easier, not harder.
If you are not feeling good about writing tests, you are probably doing it wrong ;)
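As a toy illustration of the "bundle functionality" tip above (the digest class below is made up), one test on the final output also exercises the private helpers:

    import hashlib

    class ReportDigest:
        """Hypothetical class with several small helpers behind one public result."""
        def __init__(self, entries):
            self.entries = entries

        def _normalize(self, entry):
            return entry.strip().lower()

        def _joined(self):
            return "|".join(sorted(self._normalize(e) for e in self.entries))

        def hexdigest(self):
            return hashlib.sha256(self._joined().encode()).hexdigest()

    def test_digest_of_known_input_is_stable():
        # One assertion on the end result exercises _normalize and _joined as well.
        digest = ReportDigest([" Beta", "alpha "]).hexdigest()
        assert digest == hashlib.sha256(b"alpha|beta").hexdigest()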
A common practice is to move all the code you can out of the codebehind and into an object you can test in isolation. Such code will usually follow the MVP or MVC design patterns. If you search for "Rhino Igloo" you will probably find the link to its Subversion repository. That code is worth a study, as it demonstrates one of the best MVP implementations on Web Forms that I have seen.
Your codebehind will, when following this pattern, do two things:
Pass all user actions on to the presenter.
Render data provided by the presenter.
Unit testing the presenter should be trivial.
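To make that concrete outside WebForms, here is a small Python-flavoured sketch of the same split (all names invented): the presenter takes a view interface and a data source, so the test never touches a page.

    class CustomerView:
        """What the codebehind implements: render-only, no logic."""
        def render_customers(self, rows):
            raise NotImplementedError

    class RecordingView(CustomerView):
        def __init__(self):
            self.rows = None
        def render_customers(self, rows):
            self.rows = rows

    class CustomerPresenter:
        def __init__(self, view, load_customers):
            self.view = view
            self.load_customers = load_customers

        def page_loaded(self):
            # The user action arrives here; the presenter decides what to show.
            rows = [c for c in self.load_customers() if c["active"]]
            self.view.render_customers(rows)

    def test_presenter_only_shows_active_customers():
        view = RecordingView()
        presenter = CustomerPresenter(
            view, load_customers=lambda: [{"name": "A", "active": True},
                                          {"name": "B", "active": False}])
        presenter.page_loaded()
        assert [c["name"] for c in view.rows] == ["A"]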
Update: Rhino Igloo can be found here: https://svn.sourceforge.net/svnroot/rhino-tools/trunk/rhino-igloo/
There have been attempts at getting Microsoft's free UI Automation (included in .NET Framework 3.0) to work with web applications (ASP.NET). A German company called Artiso has written a blog entry that explains how to achieve that (link).
However, their blog post also links to an MSDN webcast that explains the UI Automation framework with WinForms, and after I had a look at this, I noticed you need the AutomationId to get a reference to the respective controls. However, in web applications, the controls do not have an AutomationId.
I asked Thomas Schissler (Artiso) about this, and he explained that this was a major drawback of Internet Explorer. He referenced an older Microsoft technology (MSAA) and was hoping himself that IE8 would do this better.
However, I also gave WatiN a try and it seems to work pretty well. I even liked Wax, which allows you to implement simple test cases via Microsoft Excel worksheets.
Ivonna can unit test your views. I'd still recommend moving most of the code to other parts. However, some code just belongs there, like references to controls or control event handlers.