What approach should I use to test a VBScript?

I've been asked to help out with a project which has made extensive use of VBScript to process a whole bunch of text files and generate certain outputs - sanitized files, SQL entries, etc. The script will run as a Scheduled Task, with its operation determined by the parameters passed to it. There's no user interface.
Are there any tools out there that I can use to automate the testing?
Can I write unit tests that target specific functions within the script without executing the script's startup code, etc.?

It sounds like you should be looking at tools at the acceptance/functional/system level rather than unit level.
A good match for what you're trying to achieve might be TextTest (though I've never used it in production). It lets you run your scripts and analyse the text they return; the documentation is fairly thorough and there are decent tutorials.
It's impossible to say whether or not you can run the scripts without the startup code, but it should be possible to refactor that code away into separate files/routines (scripts, ini files, or a combination of both) and have test-specific versions return canned answers, which will allow you to isolate the methods under test. This is the same principle as with any test setup.
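To illustrate the idea - sketched in Python purely because it's compact; the same structure applies to a VBScript split across include files - keep the scheduled-task entry point in one file and the processing routines in another, so a test harness can load just the routines and feed them canned input:

    # Hypothetical sketch of the principle, not actual VBScript.
    # file_processing.py - extracted routines, no startup code.
    def sanitize_line(line):
        """Strip trailing whitespace and non-printable characters."""
        return "".join(ch for ch in line.rstrip() if ch.isprintable())

    # test_file_processing.py - targets one routine with canned input.
    import unittest
    from file_processing import sanitize_line

    class SanitizeLineTest(unittest.TestCase):
        def test_strips_control_characters(self):
            self.assertEqual(sanitize_line("abc\x07  \n"), "abc")

    if __name__ == "__main__":
        unittest.main()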


How to list available test classes

Is there a way to get Gradle (1.12) to list all of the available unit test classes in a project?
I'm considering putting a front-end on a series of tests we use in my company, and since new tests are always being added, I need a way to get a list of available tests.
I realize that I could scan the actual project for classes that reside in the test sources tree, but I was hoping for something easily parsed from Gradle. I just don't know if that's really an option and I'm having trouble getting decent search results since "test" is such a generic word.
Any help would be appreciated.
There is no official Gradle API that exposes this information. You can check whether ClassScanner.java does what you need - look at the Gradle sources; it is also used in EclipseTestExecuter.java. Keep in mind that it is an implementation detail.
A simpler approach is to run the tests with logging enabled so that the name of each executed test is printed. I believe there is an example of how to do this in the Gradle documentation.

Framework for collecting unit test output from disparate executables?

I've become responsible for cleaning up an old unit-testing environment. The existing environment contains a ton of executables (1000+ shell scripts, compiled binaries, etc.) and each one returns code 0 or 1 plus some output, depending on the results. Tests can also time out. A set of Perl scripts goes through, runs each executable, and collects the results into a big XML file which gets rendered into a web page. This system works great, but isn't very extensible or fast.
In addition to cleaning this up and speeding it up, I would like to implement concurrent testing. Right now the tests run one at a time. Many of the tests require resource locks (ports, files, etc.) and there's no listing of which are safe to run simultaneously. An option here is to run each one in a VM of some kind.
Is there a framework or tool designed for this type of situation? What's the best way to approach it if I have to write my own brand-new system? My limitation is that I cannot change the 1000+ test executables. I was thinking of something like PyUnit, with Python unit tests that use subprocess or similar to execute the existing tests and convert the results into a Pythonic format. Is this overkill? Could such a system support isolation of processes to avoid deadlocks and race conditions between tests? In the worst case, I was thinking of using Python to SSH into one of several VMs to run the tests.
Any help or guidance would be appreciated.
Edit: Python isn't strictly necessary. Another solution might be to break the test set into several M-sized chunks and run each chunk in an independent VM over SSH until all of the chunks are done. I'm hoping there's a tool out there that solves this problem.
There is no out-of-the-box (or customise-it-for-your-needs) solution that I am aware of for the problem you are facing.
Looking at your problem, several distinct needs stand out:
Test Tagging
Test Execution
Test Result Capture
The first issue you need to address is how you are going to identify and track the tests that you can execute in a given environment, concurrently, etc.
If you were using nose (i.e. "is nicer testing for python"), you would be able to use the Attribute selector plugin to tag the tests with various attributes.
nose would also assist with test execution and, when coupled with test tagging, would allow you to run tests in parallel, etc. Using nose, you should be able to run an external executable and assert based on its status code.
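As a rough sketch (the path and the tag name below are made up; adjust to your environment), a nose test that shells out to one of the existing executables and is tagged with the resource it locks might look like this:

    # Hypothetical sketch: wrap one legacy test executable as a nose test.
    # The executable path and the 'uses_port_8080' tag are placeholders.
    import subprocess
    from nose.plugins.attrib import attr

    @attr('uses_port_8080')
    def test_legacy_port_check():
        proc = subprocess.Popen(['/opt/legacy-tests/port_check.sh'],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        out, _ = proc.communicate()
        # Legacy convention described in the question: 0 = pass, 1 = fail.
        assert proc.returncode == 0, out

Tagged this way, tests can be selected or excluded from the command line (e.g. nosetests -a '!uses_port_8080'), and nose's multiprocess plugin (--processes=N) gives a degree of parallelism for the tests that don't share resources.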
The final problem you face is how to capture test output that is in a proprietary format and translate it into a format that can be ingested by readily available tools. Again, I believe nose could help you out here. You could build a nose plugin that takes your proprietary format, translates it to XUnit format, and reports results that way.
With all of the above in mind, here is how I would tackle this situation:
Create a test wrapper class based on nose which
Can be tagged
Executes a program and captures its output
Translates that output to XUnit format
Create a wrapper for each test
Figure out how to automate this process, because it is going to be tedious (a rough sketch follows this outline)
Build a test execution harness, which
Spins up one or more VMs
Loads and runs a test wrapper
Captures the results
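As a rough sketch of the "automate the wrapper creation" step (the directory below is hypothetical, and this assumes the 0/1 exit-code convention described in the question), a nose test generator can yield one test case per legacy executable; nose's built-in xunit plugin (--with-xunit) will then emit the results as XUnit XML for whatever renders your web page.

    # Hypothetical sketch: generate one nose test per legacy executable.
    # TEST_ROOT is a placeholder; point it at the real tree of test programs.
    import glob
    import os
    import subprocess

    TEST_ROOT = '/opt/legacy-tests'

    def test_legacy_suite():
        # nose treats each yielded (function, argument) pair as its own test case
        for path in sorted(glob.glob(os.path.join(TEST_ROOT, '*'))):
            if os.access(path, os.X_OK):
                yield check_executable, path

    def check_executable(path):
        proc = subprocess.Popen([path], stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        out, _ = proc.communicate()
        assert proc.returncode == 0, out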

Unit/integration testing Asterisk configuration

Unit and integration testing is usually performed as part of a development process, of course. I'm looking for ways to use this methodology in configuration of an existing system, in this case the Asterisk soft PBX.
In the case of Asterisk, the configuration file is as much a programming language as anything else, complete with loops, jumps, conditionals, etc., and can get quite complex. Changes to the configuration often suffer from the same problems as changes to a complex software product - it can be hard to foresee all the effects without tests in place. It's made worse by the fact that the nature of the system is to communicate with external entities, i.e. make phone calls.
I have a few ideas about testing the system using call files (to create specific calls between extensions) while watching the manager interface for generated events. A test could then watch for an expected result, i.e. dialling *99# should result in the Voicemail application getting called.
The flaws are obvious - it doesn't test the actual result, only what the system thinks is the result, and it probably requires some modification of the system under test. It's also really hard to write these tests robustly enough to only trigger on the expected output, especially if the system is in use (i.e. there are other calls in progress).
Is what I want, a testing system for Asterisk, impossible? If not, do you have any ideas about ways to go about this in a reasonable manner? I'm willing to put a fair amount of development time into this and release the result under a friendly license, but I'm unsure about the best way to approach it.
This is obviously an old question, so there's a good chance that, when the original answers were posted, Asterisk did not support unit / integration testing to the extent that it does today (although the Unit Test Framework API went in on 12/22/09, so that, at least, did exist).
The unit testing framework (David's e-mail from the dev list here) lets you execute unit tests directly within Asterisk. Tests are registered with the framework and can be executed / viewed through the CLI. Since this is all part of Asterisk, the tests are compiled into the executable. You do have to configure Asterisk with the --enable-dev-mode option, and mark the tests for compilation using the menuselect tool (some applications, like app_voicemail, automatically register tests - but they're the minority).
Writing unit tests is fairly straightforward - and while it (obviously) isn't as fully featured as a commercial unit test framework, it gets the job done and can be enhanced as needed.
That most likely isn't what the majority of Asterisk users are going to want to use - although Asterisk developers are highly encouraged to check it out. Both users and developers are probably interested in integration tests, which the Asterisk Test Suite provides. At its core, the Test Suite is a python script that executes other scripts - be they lua, python, etc. The Test Suite comes with a set of python and lua libraries that help to orchestrate and execute multiple Asterisk instances. Test writers can use third party applications such as SIPp or Asterisk interfaces (AMI, AGI) or a combination thereof to test the hosted Asterisk instance(s).
There are close to 200 tests now in the Test Suite, with more being added on a fairly regular basis. You could obviously write your own tests that exercise your Asterisk configuration and have them managed by the Test Suite - if they're generic enough, you could submit them for inclusion in the Test Suite as well.
Note that the Test Suite can be a bit tricky to set up - Leif wrote a good blog post on setting up the Test Suite here.
Well, it depends on what you are testing. There are a lot of ways to handle this sort of thing. My preference is to use Asterisk call files bundled with dialplan code, e.g. create a call file to dial some public number; once it is answered, hop back to the specified dialplan context and perform all of the testing logic there (play sound files, listen for keypresses, etc.).
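For illustration, here's a minimal sketch of that pattern in Python - the channel, context and extension names are placeholders for whatever your dialplan actually defines, and the spool path is the stock default:

    # Hypothetical sketch: originate a test call by dropping a call file
    # into Asterisk's outgoing spool directory.
    import os
    import shutil
    import tempfile

    CALL_FILE = """\
    Channel: Local/100@internal
    Context: test-voicemail
    Extension: s
    Priority: 1
    """

    def originate_test_call(spool_dir='/var/spool/asterisk/outgoing'):
        # Write the file elsewhere first, then move it into the spool,
        # so Asterisk never picks up a half-written call file.
        # (You may also need to chown it to the asterisk user.)
        fd, tmp_path = tempfile.mkstemp(suffix='.call')
        with os.fdopen(fd, 'w') as f:
            f.write(CALL_FILE)
        shutil.move(tmp_path, os.path.join(spool_dir, os.path.basename(tmp_path)))

The dialplan context named in the call file is where the actual test logic (Playback, WaitExten, and so on) lives.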
I wrote an Asterisk call file library which makes this sort of testing EXTREMELY easy. It has a lot of documentation and examples too; check it out here: http://pycall.org/. That may help you.
Good luck!
You could create a set of specific scenarios and use Asterisk's MixMonitor command to record these calls. This would enable you to establish a set of sound recordings that are normative for your system for these tests, and use an automated sound file comparison tool (perhaps something from comparing-sound-files-if-not-completely-identical?) to examine the results. Just an idea.
Unit testing, as opposed to integration testing, means your code is supposed to be architected so the logic itself is insulated from external dependencies. You said "the configuration file is as much a programming language as anything else", but that's the thing - real languages have not just control flow but abstraction capabilities, which allow you to write the logic in a way that can be unit tested. That's why I keep logic outside of Asterisk as much as possible.
For integration testing, script linphonec to drive your application, and grep the asterisk console to see what it's doing.
You can use Docker and fire up temporary Asterisk instances for each test.

Test Anything Protocol in Shell scripts

Has anyone seen, tried to implement, or otherwise played with TAP in shell? We're looking to create unit tests across many languages (don't get me started on why this doesn't exist so far), and since we have so much Perl code, we'll be looking at TAP (among others, I imagine). I've found a TAP library for C; Perl, of course, has it built in; and I've even found an API for Java. But one area missing is shell script testing.
Not that I've found much on unit-testing shell scripts, either, but since we do have thousands of lines of shell code, it'd be nice to be able to test it somehow.
See the list of TAP Producers for a list of libraries. On that list you will find Tap-functions for shell code.
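For reference, a TAP stream is just plain text on stdout - a plan line followed by one ok / not ok line per test - so even a hand-rolled shell function can produce it:

    1..3
    ok 1 - config file was generated
    not ok 2 - server answers on port 8080
    ok 3 - temporary files cleaned up

Anything that emits output in that shape can be picked up by the existing Perl TAP harnesses (prove, etc.).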
Bats is a simple Bash-only test framework; tests can be written in a very clear syntax.
shUnit is the oldest shell test framework, but has little documentation.
shUnit2 is a more recent project inspired by shUnit, but completely different. Tests can be written in a more xUnit-like fashion. Most importantly, it is POSIX compatible.
I usually write my own small test framework for my shell scripts. Some things to keep in mind when doing this:
When working with files, make all paths relative to some variable which you can modify in your tests.
diff(1) is great to verify test results (and to display a useful error message to the user)
Use local variables extensively
Everything must be in a function
That said, my "test framework" is mostly a set of shell functions (named test*) and a runTests function which calls them one by one. Nothing fancy, really. Tests create a work directory for the test, copy all necessary files into it, run a function, verify the results against a know-good set of files.

Can any IDE or framework help test new code quickly without having to run the whole application

I mainly develop in native C++ on Windows using Visual Studio.
A lot of times, I find myself creating a new function/class or whatever, and I just want to test that piece of logic I just wrote, quickly.
A lot of times, I have to run the entire application, which sometimes could take a while since there are many connected parts.
Is there some sort of tool that will allow me to test that new piece of code quickly without having to run the whole application?
i.e.
Say I have a project with about 1000 files, and I'm adding a new class called Adder. Adder has a method Add( int, int );
I just want the IDE/tool to let me test just the Adder class (without having to create a new project and write a dummy main.cpp) by allowing me to specify the values of the inputs going into the Adder object. Likewise, it would be nice if I could specify the expected output from the tested object.
What would be even cooler is if the IDE/tool would then "record" these sets of inputs/expected outputs and automatically create a unit tester class based on them. If I added more input/output sets, it would keep building a history of inputs/outputs.
Or how about this: what if I started the actual application, fed some real data to it, and had the IDE/tool capture the complete inputs going into the unit being tested? That way, I could quickly restart my testing if I found some bugs in my program or wanted to change its interface a bit. I think this feature would be really neat and could help developers quickly test and modify their code.
Am I talking about mock object / unit testing that already exists?
Sidenote: it would be cool if the Visual Studio debugger had a "replay" technology where the user can step back to find what went wrong. Such a debugger already exists: http://www.totalviewtech.com/
It's very easy to get started with static unit testing in C++ - three lines of code.
VS is a bit poor in that you have to go through wizards to make a project to build and run the tests, so if you have a thousand classes you'd need a thousand projects. For large projects on VS I've therefore tended to organise the project into a few DLLs for independent building and testing rather than monolithic ones.
An alternative to static tests, more similar to your 'poke and dribble' scenario, could be done in Python, using swig to bind your code to the interpreter along with Python's doctests. I haven't used the two together myself. Again, you'd need a separate target to build the Python binding and another to run the tests, rather than it being just a simple 'run this class' button.
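A minimal sketch of what that could look like, assuming the swig-generated module is called myproject (the module and class names here are made up):

    # Hypothetical sketch: exercising a swig-wrapped C++ class with doctest.
    # "myproject" and "Adder" stand in for whatever swig actually generates.
    """
    >>> from myproject import Adder
    >>> adder = Adder()
    >>> adder.Add(2, 3)
    5
    """

    if __name__ == "__main__":
        import doctest
        doctest.testmod()

Run it with -v and doctest reports each interaction checked against the recorded output, which is not far from the 'record inputs and expected outputs' workflow you describe.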
I would go with Boost.Test (see the tutorial here).
The idea would be to add a new configuration to your project which excludes all unnecessary .cpp files from the build. You would then just add .cpp files describing the tests you want to pass.
I am no expert in this area, but I have used this technique in the past and it works!
I think you are talking about unit testing and mock objects. Here are a couple of C++ mock object libraries that might be useful:
googlemock, which only works with googletest
mockpp
You are essentially asking "how can I test one function instead of the whole application?" That is what unit testing is, and you will find many questions about unit-testing C++ on SO.