Unit testing installation of services

Our installer program is going to be installing a number of system services, under both Windows and UNIX, using JavaServiceWrapper. There will be a class responsible for creating JavaServiceWrapper config files, installing the services, etc.
Can I have some suggestions on how to unit-test this class?

I would not struggle too much with unit testing such a class; rather, I would go for integration / smoke tests. You need these anyway to verify that your installation works properly - preferably not only on your own machine, but also in the target environment, in real life, before you demonstrate it to your boss and most important client :-)
Update: I assume that the class in question would not contain much complicated logic, but rather just glue together different pieces supplied by other APIs. However, if this is not the case, and you feel you can't easily test a significant part of its functionality via integration tests, you can still try unit testing with good ol' mocks and/or dependency injection.
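If you go that route, the test boils down to injecting fake collaborators and asserting that the class asks them to do the right things. A minimal sketch of the idea (in Python for brevity; the ServiceInstaller, config_writer and service_registry names are invented here, not part of JavaServiceWrapper):

    # Hypothetical names, just to illustrate mocks + dependency injection.
    import unittest
    from unittest import mock

    class ServiceInstaller:
        def __init__(self, config_writer, service_registry):
            self.config_writer = config_writer
            self.service_registry = service_registry

        def install(self, service_name, jvm_args):
            config = "wrapper.java.additional.1=%s\n" % " ".join(jvm_args)
            self.config_writer.write(service_name + ".conf", config)
            self.service_registry.register(service_name)

    class ServiceInstallerTest(unittest.TestCase):
        def test_install_writes_config_and_registers_service(self):
            writer, registry = mock.Mock(), mock.Mock()
            ServiceInstaller(writer, registry).install("myapp", ["-Xmx256m"])
            writer.write.assert_called_once_with("myapp.conf", mock.ANY)
            registry.register.assert_called_once_with("myapp")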

Lol! Found this last night. Environmentally Friendly Deployment. I really think the more complex your deployment, the more you need to validate your environment.

Related

How Do I Know That I'm Not Breaking Anything During Refactoring?

I've started my first experience refactoring a huge system and writing unit tests for it, but I am just scared that I'm breaking the code without knowing it.
I studied "The Art of Unit Testing" and "Working Effectively with Legacy Code" to find a solution, and my next plan is to stop refactoring for a while and write some integration tests (I have selected the FitNesse tool for this purpose) to run every time after I change something.
I am just wondering whether anyone else has had the same experience. Do you think integration testing can be a good solution for this issue? Do you have any better idea?
I also checked this question (How can I check that I didn't break anything when refactoring?) but my situation is different from that one, because there are no unit tests available and I am the one here to write them.
Integration testing is part of a good solution for refactoring. However, some problems introduced by the refactoring will only show up once you have deployed the project.
A better idea would be to incorporate the integration testing into a continuous delivery strategy. This means you should have a clean and practical approach to build and deploy the project as often as possible to a near identical environment while refactoring it. The book Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation is a good resource. Here is one of the antipatterns it describes (pages 7-9):
Antipattern: Deploying to a Production-like Environment Only after Development Is Complete
In this pattern, the first time the software is deployed to a production-like environment (for example, staging) is once most of the development work is done...
Once the application is deployed into staging, it is common for new bugs to be found...
The remedy is to integrate the testing, deployment, and release activities into the development process. Make them a normal and ongoing part of development so that by the time you are ready to release your system into production there is little to no risk, because you have rehearsed it on many different occasions in a progressively more production-like sequence of test environments. Make sure everybody involved in the software delivery process, from the build and release team to testers to developers, work together from the start of the project.
At the end of the day, this is the problem of working with Legacy Code.
Integration tests are your best bet, but to write those to correctly meet your needs, you would need to know the original intent of the code, which often isn't clear, because there are often hidden requirements.
There are no ideal solutions.
Although the previous answers are very good, I'd like to add that unit tests are exactly for this. In our test project, when we refactor each other's components, it's mandatory to run the already existing unit tests written by the initial developer, plus new ones, before committing to version control. Besides, it's a good approach to have smoke tests running on every check-in. And of course: integration, regression, etc. afterwards.
UPDATE
I'm in the exact same situation - chained to maintenance. Tools can vary greatly, depending on the needs - from web and unit testing up to SOA and server testing. If you provide more detailed info about your SUT, I'll gladly try to help.

Setting up proper testing for Django for TDD

I've been ignoring the need to test my project for far too long.
So I spent more than a day looking for ways to implement tests for my current apps and trying to get some TDD going for new apps.
I found a lot of "tutorials" with the steps: "1. Install this 2. Install that 3. Install thisnthat 4. Done!",
but no one seems to talk about how to structure your tests, both file- and code-wise.
And no one ever talks about how to set up a CI server, or how to integrate the testing with the deployment of your project.
A lot of people mention fabric, virtualenv and nose - but no one describes how they work with them together as a whole.
What I keep finding is detailed information about how you set up a proper Rails environment with testing and CI etc...
Does anyone else feel that the Django community lacks in this area, or is it just me? :)
Oh, and does anyone else have any suggestions on how to do it?
As I see it, there are several parts to the problem.
One thing you need are good unit tests. The primary characteristic of unit tests is that they are very fast, so that they can test the combinatorial possibilities of function inputs and branch coverage. To get their speed, and to maintain isolation between tests, even if they are running in parallel, unit tests should not touch the database or network or file system. Such tests are hard to write in Django projects, because the Django ORM makes it so convenient to scatter database access calls throughout your product code. Hence any tests of Django code will inevitably hit the database. Ideally, you should approach this by limiting the database access in your product code to a thin data access layer built on top of the django ORM, which exposes methods pertinent to your application. Another approach is for your tests to mock out the ORM calls. In the worst case, you will give up on this: Your unit tests become integration tests: They actually hit the database, cross multiple layers of your architecture, and take minutes to run, which discourages developers from running them frequently enough.
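To make the thin-data-access-layer idea concrete, here is a rough sketch (the module layout and names such as data_access and recent_titles are invented for illustration, not a Django convention):

    # data_access.py - the only module that touches the Django ORM
    from myapp.models import Article  # assumed model

    def recent_titles(limit=10):
        qs = Article.objects.order_by("-created").values_list("title", flat=True)
        return list(qs[:limit])

    # summary.py - pure logic, trivial to unit test
    from myapp import data_access

    def front_page_summary():
        return ", ".join(data_access.recent_titles(limit=3))

    # test_summary.py - fast unit test, never touches the database
    from unittest import mock
    from myapp.summary import front_page_summary

    def test_front_page_summary_joins_titles():
        with mock.patch("myapp.summary.data_access") as da:
            da.recent_titles.return_value = ["a", "b", "c"]
            assert front_page_summary() == "a, b, c"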
The implication of this is that writing integration tests is easy - the canonical style of Django tests covers this perfectly.
The final, and hardest, part of the problem is running your acceptance tests. The defining characteristic of acceptance tests is that they invoke your application end-to-end, as a user does in production, to prove that your application actually works. Canonical Django tests using the Django test runner fall short of this. They do not issue actual HTTP requests (instead, they examine the URL config to figure out what middleware and view would get called to handle a particular request, and then they call it, in process.) This means that such tests are not testing your webserver config, nor any JavaScript, or rendering in the browser, etc. To test this, you need something like Selenium.
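For that last piece, Django's LiveServerTestCase plus Selenium is one workable combination; a rough sketch (the URL and the expected page title below are placeholders):

    from django.test import LiveServerTestCase
    from selenium import webdriver

    class HomePageTest(LiveServerTestCase):
        def setUp(self):
            self.browser = webdriver.Firefox()  # any WebDriver works

        def tearDown(self):
            self.browser.quit()

        def test_home_page_renders_in_a_real_browser(self):
            # real HTTP request through the URL config, middleware, templates
            # and the browser's own rendering / JavaScript
            self.browser.get(self.live_server_url + "/")
            self.assertIn("Welcome", self.browser.title)  # placeholder assertion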
Additionally, we have many server-side processes, such as cron jobs, which use code from our Django project. Acceptance tests which involve these processes should invoke the jobs just like cron does, as a new process.
Both these scenarios have some problems. Firstly, you can't simply run such tests under the Django test runner. If you try to do so, then you will find that the test data you have written during the test setup (either using the Django fixtures mechanism, or by simply calling "MyModel().save()" in a test) are in a transaction which your product code, running in a different process, is not party to. So your tests have to commit the changes they make before the product code can see them. This interferes with the clean-up between tests that Django's test runner helpfully does, so you have to switch it into a different mode, which does explicit deletes rather than rolling back. Sadly, this is much slower. At last night's London Django user group, a couple of Django core developers assured me that this scenario also has other complications (which I confess I don't know the details of), which it is best to avoid by not running acceptance tests within the Django test runner at all, but creating them as a completely stand-alone set of tests.
If you do this, then your immediate problem is that you have lost the benefits the Django test runner provides: namely, it creates a test database, and cleans it between each test. You will have to create some equivalent mechanism for yourself. You will need your product code to run against a test database if it is being invoked as part of a test. You need to be absolutely certain that if product code is run as part of a test, even on a production box, then it can NEVER accidentally touch the production database, so this mechanism has to be absolutely failsafe. Forgetting to set an environment variable in a test setup, for example, should not cause a blooper in this regard.
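One possible shape for such a failsafe - the environment variable and the "test_" naming convention below are assumptions of this sketch, not a standard Django mechanism - is to refuse to start unless the run is explicitly marked as a test and the configured database name looks like a test database:

    import os
    import sys

    def assert_safe_test_database(settings):
        # call this from stand-alone acceptance tests and from any product
        # entry point that a test can invoke as a separate process
        if os.environ.get("ACCEPTANCE_TEST_RUN") != "1":
            sys.exit("Refusing to run: ACCEPTANCE_TEST_RUN is not set")
        db_name = settings.DATABASES["default"]["NAME"]
        if not db_name.startswith("test_"):
            sys.exit("Refusing to run against non-test database %r" % db_name)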
This is all before even considering the complications that arise from deployment, having parts of your project in different repos, dependent on each other, creating pip-installable packages, etc.
All in all, I'd love to hear from someone who feels they have found a good solution to this problem. It is far from a trivial issue as some commenters imply.
Harry Percival is creating a Django / TDD / Selenium tutorial (and accompanying workshop, if you live in London.) His book reads like a hands-on tutorial, and goes into great detail on the subject:
https://www.obeythetestinggoat.com/book/part1.harry.html
In my experience, fine-grained unit tests for web apps are not worth it: the setup/teardown is too expensive and the tests are too fragile. The only exception is isolated components, especially those with clear inputs and outputs and complicated algorithms. Do unit-test those to the smallest detail.
I had the best testing experience using a semi-functional testing tool called testbrowser, which simulates browser actions in Python. For integration with Django, install the homophony app (disclaimer: I am the author of the app).
Testbrowser may be a little too coarse for test-driven development, but it's the best testing tool of the ones I have used so far. Most importantly, it scales up fairly well, whereas unit tests and browser-based functional test tools tend to become very brittle as your app grows in size.
As for a CI tool, go with Buildbot or Jenkins.
I use a combination of Django's excellent extension of the Python unittest framework for testing APIs / models / helper functions, and Selenium for in-browser testing. Selenium has great instructions for how to set it up and write tests in Python.

Integration testing - can it be done right?

I used TDD as a development style on some projects in the past two years, but I always get stuck on the same point: how can I test the integration of the various parts of my program?
What I am currently doing is writing a testcase per class (this is my rule of thumb: a "unit" is a class, and each class has one or more testcases). I try to resolve dependencies by using mocks and stubs and this works really well as each class can be tested independently. After some coding, all important classes are tested. I then "wire" them together using an IoC container. And here I am stuck: how do I test whether the wiring was successful and the objects interact the way I want?
An example: Think of a web application. There is a controller class which takes an array of ids, uses a repository to fetch the records based on these ids and then iterates over the records and writes them as a string to an outfile.
To make it simple, there would be three classes: Controller, Repository, OutfileWriter. Each of them is tested in isolation.
What I would do in order to test the "real" application: making the HTTP request (either manually or automated) with some ids from the database and then looking in the filesystem to see if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic? Is this what is called an "integration test"? In a book I recently read about unit testing, it seemed to me that integration testing was presented more as an anti-pattern.
IMO - and I have no literature to back me on this - the key difference between our various forms of testing is scope:
Unit testing is testing isolated pieces of functionality [typically a method or stateful class]
Integration testing is testing the interaction of two or more dependent pieces [typically a service and consumer, or even a database connection, or connection to some other remote service]
System integration testing is testing of a system end to end [a special case of integration testing]
If you are familiar with unit testing, then it should come as no surprise that there is no such thing as a perfect or 'magic-bullet' test. Integration and system integration testing is very much like unit testing, in that each is a suite of tests set to verify a certain kind of behavior.
For each test, you set the scope, which then dictates the input and expected output. You then execute the test and evaluate the actual result against the expected one.
In practice, you may have a good idea how the system works, and so writing typical positive and negative path tests will come naturally. However, for any application of sufficient complexity, it is unreasonable to expect total coverage of every possible scenario.
Unfortunately, this means unexpected scenarios will crop up in Quality Assurance [QA], PreProduction [PP], and Production [Prod] cycles. At that point, your attempts to replicate these scenarios in dev should make their way into your integration and system integration suites as automated tests.
Hope this helps, :)
ps: pet-peeve #1: managers or devs calling integration and system integration tests "unit tests" simply because NUnit or MSTest was used to automate them ...
What you describe is indeed integration testing (more or less). And no, it is not an antipattern, but a necessary part of the software development lifecycle.
Any reasonably complicated program is more than the sum of its parts. So however well you unit test it, you still have little idea of whether the whole system is going to work as expected.
There are several aspects of why it is so:
unit tests are performed in an isolated environment, so they can't say anything about how the parts of the program are working together in real life
the "unit tester hat" easily limits one's view, so there are whole classes of factors which the developers simply don't recognize as something that needs to be tested*
even if they do, there are things which can't be reasonably tested in unit tests - e.g. how do you test whether your app server survives under high load, or if the DB connection goes down in the middle of a request?
* One example I just read from Luke Hohmann's book Beyond Software Architecture: in an app which applied strong antipiracy defense by creating and maintaining a "snapshot" of the IDs of HW components in the actual machine, the developers had the code very well covered with unit tests. Then QA managed to crash the app in 10 minutes by trying it out on a machine without a network card. As it turned out, since the developers were working on Macs, they took it for granted that the machine has a network card whose MAC address can be incorporated into the snapshot...
What I would do in order to test the "real" application: making the HTTP request (either manually or automated) with some ids from the database and then looking in the filesystem to see if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic?
Maybe you are duplicating code, but you are not duplicating effort. Unit tests and integration tests serve two different purposes, and usually both purposes are desired in the SDLC. If possible, factor out code used by both unit and integration tests into a common library. I would also try to have separate projects for your unit and integration tests, because your unit tests should be run separately (fast and no dependencies). Your integration tests will be more brittle and break often, so you will probably have a different policy for running/maintaining those tests.
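As one concrete way to keep those runs separate (sketched here with pytest markers; NUnit's [Category] or MSTest's [TestCategory] give you the same split in .NET), tag the slow tests and exclude them from the default run:

    # Assumes pytest; the marker must be declared in pytest.ini, e.g.:
    #   [pytest]
    #   markers = integration: slow tests that exercise real wiring
    #   addopts = -m "not integration"
    import pytest

    @pytest.mark.integration
    def test_controller_writes_outfile(tmp_path):
        # exercises the real container wiring end to end; excluded from the
        # default run, executed explicitly with:  pytest -m integration
        outfile = tmp_path / "records.txt"
        outfile.write_text("stand-in for the real end-to-end flow\n")
        assert outfile.read_text().count("\n") == 1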
Is this what is called an "integration test"?
Yes indeed it is.
In an integration test, just as in a unit test, you need to validate what happened in the test. In your example you specified an OutfileWriter; you would need some mechanism to verify that the file and its data are good. You really want to automate this, so you might want to have something like:
    import java.nio.file.*;
    import java.util.List;

    class OutFileValidator {
        boolean isCorrect(String fileName, List<String> expectedLines) throws Exception {
            // open the file, read the data, and compare it to what the test expects
            return Files.readAllLines(Paths.get(fileName)).equals(expectedLines);
        }
    }
You might review "Taming the Beast", a presentation by Markus Clermont and John Thomas about automated testing of AJAX applications.
YouTube Video
Very rough summary of a relevant piece: you want to use the smallest testing technique you can for any specific verification. Spelling the same idea another way, you are trying to minimize the time required to run all of the tests, without sacrificing any information.
The larger tests, therefore, are mostly about making sure that the plumbing is right - is Tab A actually in slot A rather than slot B; do both components agree that length is measured in meters rather than feet, and so on.
There's going to be duplication in which code paths are executed, and possibly you will reuse some of the setup and verification code, but I wouldn't normally expect your integration tests to include the same level of combinatoric explosion that would happen at a unit level.
Driving your TDD with BDD would cover most of this for you. You can use Cucumber / SpecFlow with Watir / WatiN. Each feature has one or more scenarios, and you work on one scenario (behaviour) at a time; when it passes, you move on to the next scenario until the feature is complete.
To complete a scenario, you have to use TDD to drive the code necessary to make each step in the current scenario pass. The scenarios are agnostic to your back end implementation, however they verify that your implementation works; if there is something that isn't working in the web app for that feature, the behaviour needs to be in a scenario.
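If you are in Python, behave gives you the same Cucumber-style flow; a small sketch (the feature text, step names and file layout are invented, and the "when" step just fakes the export instead of driving a real app):

    # features/export.feature would contain, in Gherkin:
    #   Scenario: Export records to an outfile
    #     Given three records exist
    #     When I export those records
    #     Then the outfile contains three lines

    # features/steps/export_steps.py
    import os
    import tempfile
    from behave import given, when, then

    @given("three records exist")
    def step_create_records(context):
        context.records = ["alpha", "beta", "gamma"]  # stand-in for real fixtures

    @when("I export those records")
    def step_export(context):
        # a real scenario would drive the web app here; this sketch just writes the file
        fd, context.outfile = tempfile.mkstemp()
        with os.fdopen(fd, "w") as f:
            f.write("\n".join(context.records) + "\n")

    @then("the outfile contains three lines")
    def step_check_outfile(context):
        with open(context.outfile) as f:
            assert len(f.readlines()) == 3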
You can of course use integration testing, as others pointed out.

Unit/integration testing Asterisk configuration

Unit and integration testing is usually performed as part of a development process, of course. I'm looking for ways to use this methodology in configuration of an existing system, in this case the Asterisk soft PBX.
In the case of Asterisk, the configuration file is as much a programming language as anything else, complete with loops, jumps, conditionals, etc., and can get quite complex. Changes to the configuration often suffers from the same problems as changes to a complex software product - it can be hard to foresee all the effects without tests in place. It's made worse by the fact that the nature of the system is to communicate with external entities, i.e. make phone calls.
I have a few ideas about testing the system using call files (to create specific calls between extensions) while watching the manager interface for generated events. A test could then watch for an expected result, i.e. dialling *99# should result in the Voicemail application getting called.
The flaws are obvious - it doesn't test the actual result, only what the system thinks is the result, and it probably requires some modification of the system under test. It's also really hard to write these tests robustly enough to only trigger on the expected output, especially if the system is in use (i.e. there are other calls in progress).
Is what I want, a testing system for Asterisk, impossible? If not, do you have any ideas about ways to go about this in a reasonable manner? I'm willing to put a fair amount of development time into this and release the result under a friendly license, but I'm unsure about the best way to approach it.
This is obviously an old question, so there's a good chance that when the original answers were posted here that Asterisk did not support unit / integration testing to the extent that it does today (although the Unit Test Framework API went in on 12/22/09, so that, at least, did exist).
The unit testing framework (David's e-mail from the dev list here) lets you execute unit tests directly within Asterisk. Tests are registered with the framework and can be executed / viewed through the CLI. Since this is all part of Asterisk, the tests are compiled into the executable. You do have to configure Asterisk with the --enable-dev-mode option, and mark the tests for compilation using the menuselect tool (some applications, like app_voicemail, automatically register tests - but they're the minority).
Writing unit tests is fairly straight-forward - and while it (obviously) isn't as fully featured as a commercial unit test framework, it gets the job done and can be enhanced as needed.
That most likely isn't what the majority of Asterisk users are going to want to use - although Asterisk developers are highly encouraged to check it out. Both users and developers are probably interested in integration tests, which the Asterisk Test Suite provides. At its core, the Test Suite is a python script that executes other scripts - be they lua, python, etc. The Test Suite comes with a set of python and lua libraries that help to orchestrate and execute multiple Asterisk instances. Test writers can use third party applications such as SIPp or Asterisk interfaces (AMI, AGI) or a combination thereof to test the hosted Asterisk instance(s).
There are close to 200 tests now in the Test Suite, with more being added on a fairly regular basis. You could obviously write your own tests that exercise your Asterisk configuration and have them managed by the Test Suite - if they're generic enough, you could submit them for inclusion in the Test Suite as well.
Note that the Test Suite can be a bit tricky to set up - Leif wrote a good blog post on setting up the Test Suite here.
Well, it depends on what you are testing. There are a lot of ways to handle this sort of thing. My preference is to use Asterisk Call Files bundled with dialplan code. E.g.: create a call file to dial some public number; once it is answered, hop back to the specified dialplan context and perform all of my testing logic (play sound files, listen for keypresses, etc.).
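A call file is just a small text file dropped into Asterisk's outgoing spool directory, so it is easy to generate from a test script. A sketch in Python (the channel, context and spool path are assumptions about your setup):

    import os
    import tempfile

    CALL_FILE_LINES = [
        "Channel: Local/100@from-test",  # assumed test channel
        "MaxRetries: 0",
        "WaitTime: 30",
        "Context: test-voicemail",       # assumed dialplan context holding the test logic
        "Extension: s",
        "Priority: 1",
    ]

    def spool_call(spool_dir="/var/spool/asterisk/outgoing"):
        # write on the same filesystem first, then rename, so Asterisk never
        # picks up a half-written call file
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(spool_dir), suffix=".call")
        with os.fdopen(fd, "w") as f:
            f.write("\n".join(CALL_FILE_LINES) + "\n")
        os.rename(tmp, os.path.join(spool_dir, os.path.basename(tmp)))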
I wrote an Asterisk call file library which makes this sort of testing EXTREMELY easy. It has a lot of documentation / examples too, check it out here: http://pycall.org/. That may help you.
Good luck!
You could create a set of specific scenarios and use Asterisk's MixMonitor command to record these calls. This would enable you to establish a set of sound recordings that were normative for your system for these tests, and use an automated sound file comparison tool (Perhaps something from comparing-sound-files-if-not-completely-identical?) to examine the results. Just an idea.
Unit testing, as opposed to integration testing, means your code is supposed to be architected so the logic itself is insulated from external dependencies. You said "the configuration file is as much a programming language as anything else" but that's the thing - real languages have not just control flow but abstraction capabilities, which allow you to write the logic in a way that can be unit tested. That's why I keep logic outside of Asterisk as much as possible.
For integration testing, script linphonec to drive your application, and grep the asterisk console to see what it's doing.
You can use Docker, and fire up temporary Asterisk instances for each test.
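A rough sketch of that approach with pytest and the Docker SDK for Python (the image name and port mapping are placeholders for whatever Asterisk image and configuration you build):

    import docker
    import pytest

    @pytest.fixture
    def asterisk_container():
        client = docker.from_env()
        container = client.containers.run(
            "my-asterisk-under-test",     # placeholder image name
            detach=True,
            ports={"5060/udp": 5060},
        )
        yield container
        container.stop()
        container.remove()

    def test_extension_reaches_voicemail(asterisk_container):
        ...  # drive the instance with SIPp, call files, AMI, etc.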

What is the best framework for Unit Testing in JavaME?

What is currently the best tool for JavaME unit testing? I've never really used unit testing before (shame on me!), so learning curve is important. I would appreciate some pros and cons with your answer. :)
I think it will depend on what kind of tests you are planning to do. Will you be using continuous integration? Is running tests on handsets a must?
If the tests are mostly logic/data-processing tests, then you can do fine with JUnit. But if you need to use some classes from javax.microedition.*, then things become a bit tricky, but not impossible.
Some examples: a test for text wrapping on screen would need javax.microedition.lcdui.Font. You can't just create a Font instance using the jars shipped with the WTK, because at initialization it will call some native methods that are not available.
For these kinds of tests I have created a stub implementation of J2ME. Basically these are my own implementations of the J2ME classes. I can set some preconditions there (for example, every character is 5 pixels wide, etc.). And it is working really great, because my tests only need to know how the J2ME classes respond, not how they are internally implemented.
For networking tests I have used MicroEmulator's networking implementation, and it has also worked out well.
One more issue with unit tests - it is better to have your mobile project set up as a Java project using Java 4, 5, or 6, because writing tests in 1.3 is, at least for me, a pain in the...
I believe that starting with JUnit will be just fine to get up and running. And if some other requirements come up (running tests on handsets), then you can explore alternatives.
I'll be honest, the only unit tester I've used in Java is JUnit and a child project for it named DBUnit for database testing... which I'm assuming you won't need under J2ME.
JUnit's pretty easy to use for unit testing, as long as your IDE supports it (Eclipse has support for JUnit built in). You just mark tests with the @Test annotation (org.junit.Test I think). You can also specify methods that should be run @Before or @After each test, as well as before or after the entire class (with @BeforeClass and @AfterClass).
I've never used JUnit under J2ME, though... I work with J2EE at work.
Never found an outstanding one. You can try to read this document: how to use it, and here is the link to download it.
It is made by Sony Ericsson but works for any J2ME development.
And I would recommend you spend some time learning unit testing in plain Java before attacking unit testing on the mobile platform. That may be too big to swallow in one shot.
Good luck!
There's a framework called J2MEUnit that you could give a try, but it doesn't look like it's still being actively developed:
http://j2meunit.sourceforge.net