I have been developing an application in my free time using Qt.
As the code base grows I am finding it difficult to keep new changes from introducing bugs in older code. So far I have been testing my application manually.
Since the target is an exe, I cannot run automated C++ tests against it without injecting some extra code into my application.
So my question is: what is the best QA technique for a GUI application when you are a single developer and won't be earning money from the project, as it will be released for free?
Thank You.
EDIT:
I would like to have a set of simple tests, each testing a specific piece of functionality in my software. I would like them to run automatically one after another, and finally produce a report of which tests failed. One way to do this would be to add new functions to the same classes, add some checks to the existing functions I want to test, and then create a new class holding all the tests. So I wanted to know whether this is the best way, or whether there is a better alternative, because every time I build a release target I would have to comment out or delete this QA code, which may itself introduce bugs into that build.
Currently I am not worried about documentation and comments, as I have maintained those from the beginning. This question is only about source-code QA.
By-the-book unit tests will only give you assurance for your methods, not for the entire application. But you can also use the same unit-test framework to write acceptance tests for specific capabilities of the application.
The easiest way to go would be to extract the GUI from the application and make the GUI dependent on an API/library. The API will make it easy to write functional tests. Be sure to make the GUI as thin as possible.
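For example, here is a minimal sketch of that split; CounterLogic is a hypothetical class that lives in the extracted library and has no GUI includes, so tests can drive it directly:

    // counterlogic.h -- lives in the core library, no Qt/GUI includes
    #pragma once

    class CounterLogic {
    public:
        void increment() { ++m_value; }
        int value() const { return m_value; }
    private:
        int m_value = 0;
    };

In the thin GUI layer a slot then only forwards, e.g. onIncrementClicked() calls m_logic.increment() and refreshes the label; the tests never touch a widget.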
I wouldn't add test code to your classes and remove it for release; I think that is as risky as shipping with the test code. You're better off having separate test sources, as already advised here.
If your project is getting large enough, you'll probably want to create some unit tests for it (I like the free CppUnit library, which is similar to JUnit; also Jo Are By suggested QtTest, which presumably is available with Qt).
Even if you have to make some changes to your production code, it will be worth your time in the end.
You may also wish to look into automated GUI testing frameworks for Qt applications; I'm not familiar with any of these.
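To give a feel for CppUnit, here is a minimal sketch of a test case plus a text runner; the AdderTest fixture and the Add function are hypothetical stand-ins for your own code:

    #include <cppunit/extensions/HelperMacros.h>
    #include <cppunit/extensions/TestFactoryRegistry.h>
    #include <cppunit/ui/text/TestRunner.h>

    // Hypothetical function under test; in your project it comes from your sources.
    static int Add(int a, int b) { return a + b; }

    class AdderTest : public CppUnit::TestFixture {
        CPPUNIT_TEST_SUITE(AdderTest);
        CPPUNIT_TEST(testAdd);
        CPPUNIT_TEST_SUITE_END();
    public:
        void testAdd() { CPPUNIT_ASSERT_EQUAL(5, Add(2, 3)); }
    };

    CPPUNIT_TEST_SUITE_REGISTRATION(AdderTest);

    int main() {
        CppUnit::TextUi::TestRunner runner;
        runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());
        return runner.run() ? 0 : 1;  // prints a report of failed tests
    }

This also addresses the asker's EDIT: the runner executes all registered tests one after another and reports which ones failed, with no QA code inside the production classes.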
Test code goes in its own source files.
You may split your exe into a library and one main.cpp which simply calls your library.
That way, you may use any unit-test framework with extra test files to generate an executable which only tests your library.
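The exe target then shrinks to something like this sketch, where app.h and runApp are hypothetical names for whatever entry point your library exposes:

    // main.cpp -- the only source file in the exe target
    #include "app.h"  // hypothetical header from your library

    int main(int argc, char** argv) {
        return runApp(argc, argv);  // all real logic lives in the library
    }

The test executable links the same library but gets its main from the test framework instead.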
For code testing you can use a JUnit-style test case (in C++, via a framework such as CppUnit).
For GUI testing you will have to do it manually, because automated tools cannot fully verify the GUI of an application.
In manual testing you check the GUI completely; defects such as an image or text not being displayed clearly, or text missing altogether, will not be caught by automation.
I've been studying Unit Testing with Google Test in C++.
If the purpose of Unit Testing is to ensure certain segments or objects of the code are working the way they are supposed to, I would assume it's not necessary to compile and export the unit testing code with the final project, right? It's not like the user will be using it anyway. It just seems like it makes the project size unnecessarily larger.
My main question is: will all the Unit Testing code be compiled and exported with the final project or will I have to manually delete all the Unit Tests before exporting it?
Is there a best (or common) practice for Unit Testing and exporting projects?
If you're publishing a library, it's quite common to publish the unit tests. Imagine you're developing on Mac or Linux, for example, and someone wants to compile on Windows. Well, they should probably be able to run your tests to ensure they pass on a different environment. Or Android, or some microcontroller. Whatever.
Also, someone might decide to help you improve your project. They'll add a nifty-cool feature. It's nice if they can run your unit tests to make sure they don't break anything.
So yes, if you're publishing your project as source code, include the unit tests. If you're only publishing a compiled library, you can exclude them.
Tests are used for maintaining source code.
When you make changes to the source code, you should run the tests to verify that the new functionality works properly and that the old functionality didn't get broken.
If a test fails, this is a signal to the developer that something has to be fixed.
So if your application or library is shipped as an executable, there is no point in shipping the tests. Why? When you have only an executable you can't modify the code, so there is no point in running tests.
If you are publishing a library as open source, then your tests should be published too.
If you are publishing your library only as an executable, the source code of the tests can still serve as a form of documentation. If tests are written properly, they will document how the API can be used in every possible way, and since tests are code (a formal language), this form of documentation can't be misinterpreted.
Shipping tests as an executable makes no sense at all.
I recommend watching Uncle Bob's (Robert C. Martin's) talks; in most of them he explains what tests are for and why they are important.
I've heard of unit testing, and written some tests myself just as tests but never used any testing frameworks. Now I'm writing a wxPython GUI for some in-house data analysis/visualisation libraries. I've read some of the obvious Google results, like http://wiki.wxpython.org/Unit%20Testing%20with%20wxPython and its link http://pywinauto.openqa.org/ but am still uncertain where to start.
Does anyone have experience or good references for someone who sort of knows the theory but has never used any of the frameworks and has no idea how it works with GUIs?
I am on a Windows machine developing a theoretically cross-platform application that uses NumPy, Matplotlib, Newville's MPlot package, and wxPython 2.8.11. Python 2.6 with plans for 3.1. I work for a bunch of scientists, so there is no in-house unit-testing policy.
If you want to unit-test your application, you shouldn't focus on GUI-testing techniques. It is much better to write the application using MVC, MVP, or another meta-pattern like these, so that the business logic and the presentation layer are separated.
It is much more important to cover the business layer with tests, since that is your code; the presentation layer is already tested by the wxWidgets developers. To test the business layer, basic tools like the standard unittest module (and maybe nose) are enough.
To make sure the whole application behaves correctly, you should add a few acceptance tests that exercise functionality end to end. These will deal with the GUI, but there will be few such tests compared to the number of unit tests.
If you limit yourself to acceptance tests only, you'll get a low-coverage, fragile, and very slow test code base.
To unit test your application without requiring lots of mock objects/stubs, your GUI's event handlers should basically delegate to other method calls, passing in values from the Event object as parameters to the delegated method.
Otherwise you'll be unable to test your application without having to mock wx's objects.
Take a look at the PyPubSub project for a great module to help with MVC.
In one early project of mine I really did test a wxPython application through the GUI layer. The tests spun up a live wxApp object, popped up real windows, and then started messing with a real MainLoop(). Very soon I realized this was the wrong way to do testing: my tests ran very slowly and were unreliable. A much better way is to set the GUI aside and test only the "model" level of your application. Note that you can actually create a model for the presentation-level logic (a model that represents some visual part of your application) and test that too, but this model should not involve any "real" GUI objects (windows, dialogs, widgets).
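The pattern these answers describe is language-agnostic; sketched in C++ (hypothetical names throughout), the idea is that the event handler stays one line deep and the model contains everything worth testing:

    #include <string>
    #include <vector>

    // GUI-free presentation model: all the logic, fully unit-testable.
    class SearchModel {
    public:
        void setQuery(const std::string& text) { m_query = text; }
        std::vector<std::string> results() const {
            // Real filtering logic would go here.
            if (m_query.empty()) return {};
            return {m_query};
        }
    private:
        std::string m_query;
    };

    // The GUI handler (wx, Qt, whatever) only forwards the event's value:
    // void OnSearchTextChanged(Event& e) { m_model.setQuery(e.text()); refresh(); }

Tests construct SearchModel directly and never start a wxApp or a MainLoop().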
We are developing applications for use within AutoCAD.
Basically we create a Class Library Project, and load the .dll in AutoCAD with a command (NETLOAD).
As so, we can use commands, "palettes", user controls, forms etc...
AutoDesk provides an API through some dll's, running in their program directory.
When referencing these dll's you can only call the dll's at runtime while loading your app in AutoCAD (This is a licensing security from AutoDesk).
For us, while developing, this is not a problem, we need to visually test within the context of AutoCAD, so we just set the Debug Properties so that they start acad.exe and load our dll with a script in the acad.exe parameters.
The problem is, when trying to unit test our code, NUnit or mstest are not running from within the AutoCAD context and they also cannot start it.
There exists a tool called Gallio, which provides an interface to AutoCAD so that it can run unit tests through IPC with named pipes.
However, this solution is, for me, too much of a hassle. I want to be able to quickly write tests without having to leave my beloved IDE.
So, what, from a "good design view", would be a good approach to this problem? I'm thinking I would basically need a testable code base which does not reference the AutoCAD dll's, and a non-testable part that does reference them.
I'm sure there are ways to get this to work (IoC, DI, Adapter pattern, ...). I just don't know these principles in depth, and thus I don't know which route will best suit my purposes and goals.
The first step is to triage your code for parts which need AutoCAD and parts which are really independent. Create unit tests for the independent parts as you usually would.
For the other parts, you need mockups which behave like AutoCAD. Make them as simple as possible (for example, just return the correct answers in the methods without doing any calculations). Now, you need several sets of classes:
A set of interfaces which your code uses to achieve something (for example, load a drawing).
A set of implementations for said set of interfaces which call the AutoCAD dlls.
A set of classes which try the implementations within the context of AutoCAD. Just create a small UI with a couple of buttons where you can run this code. It is used to reassure yourself that your mockups do the right thing. Log method parameters and results to some file so you can try how AutoCAD responds. If a mockup breaks, you can use this code to verify what AutoCAD is doing and you can use it as a reference when developing the mockups.
When you know how AutoCAD responds, create the mockups. In your tests, create them with the desired results (and errors, so you can test error handling, too). So when you have boolean loadDrawing(File filename), create a mockup which returns true for the filename exists.dxf and false for anything else.
Use a factory or DI to tell your application code which implementation to use. I tend to have a big global config class with a lot of public fields where I simply store the objects to use. I can set this up in the beginning, it's fast, it's easy to understand. If you need to create objects at runtime, then put factories in the config class which generate the objects for you, so you can swap them out.
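Sketched in C++ for illustration (all names hypothetical; the production implementation would be the only class touching the AutoCAD dll's):

    #include <string>

    // 1. Interface your application code depends on.
    class IDrawingService {
    public:
        virtual ~IDrawingService() = default;
        virtual bool loadDrawing(const std::string& filename) = 0;
    };

    // 2. Production implementation wrapping the real AutoCAD calls (omitted here).

    // 3. Mockup with canned answers, usable without AutoCAD running.
    class MockDrawingService : public IDrawingService {
    public:
        bool loadDrawing(const std::string& filename) override {
            return filename == "exists.dxf";  // canned behavior, as described above
        }
    };

    // 4. Global config object through which the app receives its implementation.
    struct Config {
        IDrawingService* drawingService = nullptr;
    };

Unit tests point Config::drawingService at a MockDrawingService; the in-AutoCAD build installs the real implementation at startup.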
I wrote ... and later broke ... a Test runner for AutoCAD. It is at https://github.com/CADbloke/CADtest. If you're interested in it nudge me along and I'll fix it faster. I am waiting for NUnit v3 release before I tackle it.
If you reset to the 3rd commit in that repo (I think) and fiddle with it from there it should run.
I mainly develop in native C++ on Windows using Visual Studio.
A lot of times, I find myself creating a new function/class or whatever, and I just want to test that piece of logic I just wrote, quickly.
A lot of times, I have to run the entire application, which sometimes could take a while since there are many connected parts.
Is there some sort of tool that will allow me to test that new piece of code quickly without having to run the whole application?
i.e.
Say I have a project with about 1000 files, and I'm adding a new class called Adder. Adder has a method Add( int, int );
I just want the IDE/tool to allow me to test just the Adder class (without me having to create a new project and write a dummy main.cpp) by allowing me to specify the value of the inputs going into Adder object. Likewise, it would be nice if it would allow me to specify the expected output from the tested object.
What would be even cooler is if the IDE/tool would then "record" these sets of inputs/expected output, and automatically create unit tester class based on them. If I added more input/output sets, it would keep building a history of input/outputs.
Or how about this: what if I started the actual application, fed some real data to it, and had the IDE/tool capture the complete inputs going into the unit being tested? That way, I could quickly restart my testing if I found some bugs in my program or wanted to change its interface a bit. I think this feature would be really neat and could help developers quickly test and modify their code.
Am I talking about mock object / unit testing that already exists?
Sidenote: it would be cool if the Visual Studio debugger had a "replay" technology where the user can step back to find what went wrong. Such a debugger already exists here: http://www.totalviewtech.com/
It's very easy to get started with static unit testing in C++ - three lines of code.
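Presumably something along these lines -- a minimal sketch, with a hypothetical Add function standing in for the code under test:

    #include <cassert>

    // Hypothetical function under test.
    int Add(int a, int b) { return a + b; }

    int main() {
        assert(Add(2, 3) == 5);  // compile, run, abort with a message on failure
        return 0;
    }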
VS is a bit poor in that you have to go through wizards to make a project to build and run the tests, so if you have a thousand classes you'd need a thousand projects. For large projects on VS I've therefore tended to organise the project into a few DLLs for independent building and testing, rather than monolithic ones.
An alternative to static tests, more similar to your 'poke and dribble' idea, could be done in Python, using SWIG to bind your code to the interpreter together with Python's doctests. I haven't used the two together myself. Again, you'd need a separate target to build the Python binding and another to run the tests, rather than it being a simple 'run this class' button.
I would go with Boost.Test (see the tutorial here).
The idea would be to add a new configuration to your project which excludes all unnecessary .cpp files from the build. You would then just add .cpp files describing the tests you want to pass.
I am no expert in this area, but I have used this technique in the past and it works!
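A test .cpp in that configuration might look like this sketch, using Boost.Test's header-only variant (the Adder struct is a hypothetical stand-in for your class):

    #define BOOST_TEST_MODULE AdderTests
    #include <boost/test/included/unit_test.hpp>

    // Hypothetical class under test; in the real project it comes from your sources.
    struct Adder { int Add(int a, int b) { return a + b; } };

    BOOST_AUTO_TEST_CASE(add_two_ints) {
        Adder adder;
        BOOST_CHECK_EQUAL(adder.Add(2, 3), 5);  // reports both values on failure
    }

The header-only variant supplies main() for you, so this single file builds into a runnable test executable.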
I think you are talking about unit testing and mock objects. Here are a couple of C++ mock-object libraries that might be useful:
googlemock, which only works with googletest
mockpp
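For a taste of googlemock, a minimal sketch (the Calculator interface is hypothetical):

    #include <gmock/gmock.h>
    #include <gtest/gtest.h>

    // Hypothetical interface the code under test depends on.
    class Calculator {
    public:
        virtual ~Calculator() = default;
        virtual int Add(int a, int b) = 0;
    };

    class MockCalculator : public Calculator {
    public:
        MOCK_METHOD(int, Add, (int, int), (override));
    };

    TEST(CalculatorTest, ReturnsCannedAnswer) {
        MockCalculator mock;
        EXPECT_CALL(mock, Add(2, 3)).WillOnce(testing::Return(5));
        // Code under test would receive the mock through its constructor.
        EXPECT_EQ(5, mock.Add(2, 3));
    }

    int main(int argc, char** argv) {
        ::testing::InitGoogleMock(&argc, argv);
        return RUN_ALL_TESTS();
    }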
You are essentially asking how can I test one function instead of the whole application. That is what unit-testing is, and you will find many questions about unit-testing C++ on SO.
How do you unit test a large MFC UI application?
We have a few large MFC applications that have been in development for many years, we use some standard automated QA tools to run basic scripts to check fundamentals, file open etc. These are run by the QA group post the daily build.
But we would like to introduce procedures such that individual developers can build and run tests against dialogs, menus, and other visual elements of the application before submitting code to the daily build.
I have heard of such techniques as hidden test buttons on dialogs that only appear in debug builds; are there any standard toolkits for this?
Environment is C++/C/FORTRAN, MSVC 2005, Intel FORTRAN 9.1, Windows XP/Vista x86 & x64.
It depends on how the app is structured. If logic and GUI code are separated (MVC), then testing the logic is easy. Take a look at Michael Feathers' "Humble Dialog Box" (PDF).
EDIT: If you think about it: You should very carefully refactor if the App is not structured that way. There is no other technique for testing the logic. Scripts which simulate clicks are just scratching the surface.
It is actually pretty easy:
Assume your control/window/whatever changes the contents of a listbox when the user clicks a button and you want to make sure the listbox contains the right stuff after the click.
Refactor so that there is a separate list with the items for the listbox to show. The items are stored in that list and are not extracted from wherever your data comes from. The code that makes the listbox list things knows only about the new list.
Then you create a new controller object which will contain the logic code. The method that handles the button click only calls mycontroller->ButtonWasClicked(). It does not know about the listbox or anything else.
MyController::ButtonWasClicked() does what needs to be done for the intended logic, prepares the item list, and tells the control to update. For that to work you need to decouple the controller and the control by creating an interface (pure virtual class) for the control. The controller knows only an object of that type, not the control itself.
That's it. The controller contains the logic code and knows the control only via the interface. Now you can write regular unit tests for MyController::ButtonWasClicked() by mocking the control. If you have no idea what I am talking about, read Michael's article. Twice. And again after that.
(Note to self: must learn not to blather that much)
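A compressed sketch of what that looks like, with hypothetical names:

    #include <string>
    #include <vector>

    // Interface (pure virtual class) decoupling the controller from the control.
    class IListView {
    public:
        virtual ~IListView() = default;
        virtual void ShowItems(const std::vector<std::string>& items) = 0;
    };

    class MyController {
    public:
        explicit MyController(IListView& view) : m_view(view) {}
        void ButtonWasClicked() {
            // The logic lives here, testable without any GUI.
            std::vector<std::string> items{"alpha", "beta"};
            m_view.ShowItems(items);
        }
    private:
        IListView& m_view;
    };

    // Test double: records what the controller asked it to display.
    class FakeListView : public IListView {
    public:
        std::vector<std::string> shown;
        void ShowItems(const std::vector<std::string>& items) override { shown = items; }
    };

A unit test constructs MyController with a FakeListView, calls ButtonWasClicked(), and asserts on shown; the real click handler in the dialog remains a one-liner.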
Since you mentioned MFC, I assumed you have an application that would be hard to get under an automated test harness. You'll see the greatest benefits of unit-testing frameworks when you build tests as you write the code. But trying to add a new feature in a test-driven manner to an application which is not designed to be testable can be hard work and, well, frustrating.
Now what I am going to propose is definitely hard work, but with some discipline and perseverance you'll see the benefit soon enough.
First you'll need some management backing for new fixes to take a bit longer. Make sure everyone understands why.
Next buy a copy of the WELC book (Working Effectively with Legacy Code). Read it cover to cover if you have the time, or, if you're hard pressed, scan the index to find the symptom your app is exhibiting. This book contains a lot of good advice and is just what you need when trying to get existing code testable.
Then for every new fix/change, spend some time and understand the area you're going to work on. Write some tests in a xUnit variant of your choice (freely available) to exercise current behavior.
Make sure all tests pass. Write a new test which exercises needed behavior or the bug.
Write code to make this last test pass.
Refactor mercilessly within the area under test to improve the design.
Repeat for every new change that you have to make to the system from here on. No exceptions to this rule.
Now the promised land: soon, ever-growing islands of well-tested code will begin to surface. More and more code will fall under the automated test suite, and changes will become progressively easier to make. And that is because slowly but surely the underlying design is becoming more testable.
The easy way out was my previous answer. This is the difficult but right way out.
I realize this is a dated question, but for those of us who still work with MFC, the Microsoft C++ Unit Testing Framework in VS2012 works well.
The General Procedure:
Compile your MFC Project as a static library
Add a new Native Unit Test Project to your solution.
In the Test Project, add your MFC Project as a Reference.
In the Test Project's Configuration Properties, add the Include directories for your header files.
In the Linker's Input options, add your MFC project's .lib along with nafxcwd.lib and libcmtd.lib.
Under 'Ignore Specific Default Libraries' add nafxcwd.lib;libcmtd.lib;
Under General add the location of your MFC exported lib file.
This question, https://stackoverflow.com/questions/1146338/error-lnk2005-new-and-delete-already-defined-in-libcmtd-libnew-obj, has a good description of why you need nafxcwd.lib and libcmtd.lib.
The other important thing to check for in legacy projects: under General Configuration Properties, make sure both projects are using the same 'Character Set'. If your MFC project uses a Multi-Byte Character Set, the MS Test project will need to as well.
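Once the projects link, a test class is a short sketch like this (the Add function is a hypothetical stand-in for code from your MFC static library):

    #include "CppUnitTest.h"

    using namespace Microsoft::VisualStudio::CppUnitTestFramework;

    // Hypothetical function under test; in practice it comes from the MFC .lib.
    static int Add(int a, int b) { return a + b; }

    TEST_CLASS(MfcLogicTests)
    {
    public:
        TEST_METHOD(AddComputesSum)
        {
            Assert::AreEqual(5, Add(2, 3));  // shows up in VS Test Explorer
        }
    };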
Though not perfect, the best I have found for this is AutoIt http://www.autoitscript.com/autoit3
"AutoIt v3 is a freeware BASIC-like scripting language designed for automating the Windows GUI and general scripting. It uses a combination of simulated keystrokes, mouse movement and window/control manipulation in order to automate tasks in a way not possible or reliable with other languages (e.g. VBScript and SendKeys). AutoIt is also very small, self-contained and will run on all versions of Windows out-of-the-box with no annoying "runtimes" required!"
This works well when you have access to the source code of the application being driven, because you can use the resource ID numbers of the controls you want to drive. In this way you do not have to worry about simulated mouse clicks on particular pixels. Unfortunately, in a legacy application, you may well find that the resource IDs are not unique, which may cause problems. However, it is very straightforward to change the IDs to be unique and rebuild.
The other issue is that you will encounter timing problems. I do not have a tried-and-true solution for these. Trial and error is what I have used, but this is clearly not scalable. The problem is that the AutoIt script must wait for the test application to respond to a command before it issues the next command or checks for the correct response. Sometimes it is not easy to find a convenient event to wait and watch for.
My feeling is that, in developing a new application, I would insist on a consistent way to signal "READY". This would be helpful to the human users as well as test scripts! This may be a challenge for a legacy application, but perhaps you can introduce it in problematical points and slowly spread it to the entire application as maintenance continues.
Although it cannot handle the UI side, I unit test MFC code using the Boost Test library. There is a Code Project article on getting started:
Designing Robust Objects with Boost
Well, we have one of these humongous MFC apps at the workplace. It's a gigantic pain to maintain or extend... it's a huge ball of mud now, but it rakes in the moolah. Anyway:
We use Rational Robot for doing smoke tests and the like.
Another approach that has had some success is to create a small product-specific language and script tests that use VBScript and some control-handle spying magic. Turn common actions into commands: e.g. OpenDatabase would be a command that in turn injects the required script blocks to click on Main Menu > File > "Open...". You then create Excel sheets which are a series of such commands; the commands can take parameters too. Something like a FIT test, but more work. Once you have most of the common commands identified and scripts ready, it's pick-and-assemble scripts (tagged by CommandIDs) to write new tests. A test runner parses these Excel sheets, combines all the little script blocks into a test script, and runs it.
OpenDatabase "C:\tests\MyDB"
OpenDialog "Add Model"
AddModel "M0001", "MyModel", 2.5, 100
PressOK
SaveDatabase
HTH
Actually we have been using Rational Team Test, then Robot, but in recent discussions with Rational we discovered they have no plans to support native x64 applications, focusing more on .NET, so we decided to switch automated QA tools. This works well, but licensing costs don't allow us to enable it for all developers.
All our applications support a COM API for scripting, which we regression-test via VB, but this tests the API, not the application as such.
Ideally I would be interested in how people integrate cppunit and similar unit-testing frameworks into the application at a developer level.