I've spent the last few days looking around for an existing solution to a functional testing problem, but I am out of ideas and would appreciate some SO help!
I've got a preexisting suite of functional networking tests currently written in C++ using Boost.Test and Google Test, though they may be rewritten in Rust soon. These generally take the following form (a rough code sketch follows the outline):
unit test fixture {
1. Start a thread representing "the server" which goes and listens on some localhost port for incoming network connections.
2. Do client stuff representing "the client" to that localhost port.
3. Join the server thread, fetching any errors or problems.
4. Exit with success or failure.
}
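For concreteness, here is a rough Boost.Test sketch of that shape; run_server and run_client are hypothetical stand-ins for the real fixture code, not part of the actual suite:

#define BOOST_TEST_MODULE loopback_functional
#include <boost/test/included/unit_test.hpp>
#include <future>

// Hypothetical stand-ins for the real server/client halves of the test.
bool run_server(unsigned short port) { /* listen on localhost:port, drive the protocol */ return true; }
bool run_client(unsigned short port) { /* connect to localhost:port, drive the protocol */ return true; }

BOOST_AUTO_TEST_CASE(loopback_round_trip)
{
    // 1. Start "the server" on a localhost port in its own thread.
    auto server = std::async(std::launch::async, run_server, 12345);
    // 2. Do the client half against the same port.
    BOOST_CHECK(run_client(12345));
    // 3. Join the server thread, surfacing any errors it saw.
    BOOST_REQUIRE(server.get());
    // 4. The framework reports success or failure from the checks above.
}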
This is great, and it works well. However, it only tests loopback, and in the real world the server component probably runs in its own process behind a NAT-routed network, so the setup isn't particularly realistic and therefore isn't really exercising the code. What I think I'm looking for is some method of splitting the server thread off into its own process, and then some method of getting the server test process and the client test process to work together to run the functional tests. Ideally the server and client processes would run on separate "machines"; that part I can automate fairly easily with OpenVZ scripting, so consider it out of scope, though it does make simply forking the process non-ideal.
I had been thinking that in this age of Web 2.0 et al. this is surely a very common functional test scenario, and that established patterns and test frameworks would therefore abound. As an old timer, my first thought on how to solve this is "DCOM", though that's a 1990s, Microsoft-only solution. Maybe there is some modern and portable equivalent and I'm simply not searching for the right terms, so here is my first question:
Is there any standard functional testing library or framework, extending Google Test or Boost.Test etc., which lets you easily choose at runtime whether the server and client parts of each functional test run as threads, as processes, or, best of all, as processes inside their own virtual machines with their own network stacks?
This test scenario is surely common as muck. But let's assume it isn't, and that no such tool exists. If it doesn't, then we need to extend Boost.Test or Google Test with some extra support. Firstly, we need to associate a supporting "server" test fixture with each client test fixture, and for the threaded scenario we need to always run the server and client fixtures concurrently. So, my second question:
Is there any way of strongly associating two test fixtures in any of the popular C++ or Rust unit testing frameworks where the two fixtures are seen as two halves of the same test, and always executed concurrently?
This leaves the second part: how to get a unit test framework to execute only the client parts in one process and only the server parts in another, to always run the two concurrently and in sync with one another, and moreover to merge the JUnit XML output from both parts into a single test result. So:
Is there any alternative functional testing approach, methodology, or open source solution which is better suited for distributed network functional testing than unit test frameworks such as Google Test or Boost.Test? Preferably something libvirt aware so it can orchestrate virtual machines as part of the testing setup and teardown? For example, is there some Jenkins plugin or something which could use Jenkins slaves in each OpenVZ container to orchestrate the concurrent execution of the multiple parts of each of the functional tests? Or is old fashioned CORBA still the least worst solution here? Is there maybe some way of automatically wrapping up test fixtures into a REST HTTP API?
I did do a quick review of the major integration testing frameworks, namely Citrus, STAF and Twister. I'll be honest: they all seem way overkill for what I want, which is a quick and easy way of making the existing functional test suite use more realistic network routing than loopback. That's essentially all I want, and I don't care how it's done so long as the checks and requires still show up in Jenkins. Over to you, Stack Overflow!
My thanks in advance for any help.
I have had similar requirements, but I come from the Java side of the world. What you can easily do is set up distributed management of nodes/machines using jGroups.
Once you understand how it works, you can build a distributed system of nodes in just 100 lines of code. With this system you can spawn and control child processes on each of those machines and check their output yourself. It should only cost you a day to take a jGroups example and get this running.
Once you have the infrastructure to copy code and execute it as an independent process, control is easy. You can then use some of those nodes to bring up Selenium, open a number of browser windows, and execute scripts (or use Sikuli) to do your testing. Since the Selenium process is again Java, you can generate all kinds of reports and print them to the console or send them directly to the cluster, since those processes can join the cluster using jGroups too.
Such a system can be implemented in a week and it is really under your control. It is very simple to do and very extensible.
You can also provide plugins for Jenkins, Jira or Quality Center to interact with it and trigger test execution and management.
Related
I'm having trouble grasping how to do TDD when building a client-server system.
The simple katas (FizzBuzz etc.) are easy to understand, but when my client needs to send the server a file over TCP sockets and get a response back, I get confused about how to test that.
I'm working on a project to build a file-sync system. The client monitors a folder, and every time a change happens (a new file, a file deletion, etc.) the server should update automatically.
The client side can involve many devices; for example, I can have a copy of the folder on two different computers, and they should all stay perfectly in sync.
I started the project with tests, but once I reached the part that talks to the server I got stuck and didn't understand how to write the tests.
Most of the material I find on TDD covers only the simple stuff. I would love your advice on this slightly more complex application.
I'm having trouble grasping how to do TDD when building a client-server system.
The reference you want is Growing Object-Oriented Software, Guided by Tests.
I started the project with tests, but once I reached the part that talks to the server I got stuck and didn't understand how to write the tests.
Basic idea: you are trying to work towards a design where you can separate the complicated code from the code that is hard/expensive to test.
This often means three "modules":
A really simple module that knows how to talk to the network
A complicated module that knows how to prepare messages for the network, and how to interpret the responses (and timeouts)
A module that can coordinate the interaction of the two modules above.
The first module, you "test" using code review, acceptance testing, and taking advantage of the fact that it doesn't change very often (because it is so simple).
The second module, you use lots of programmer tests to make sure that the logic correctly handles all of the different messages that can pass through it.
The third module, you concentrate on testing the protocol. Here, we'll often use a substitute implementation (aka a mock or some other flavor of test double) for one or both of the first two modules.
In a language with types like Java or C#, the need for substitutes will often mean that the first two modules will need to implement some interface, and the third module will depend on those interfaces rather than having direct dependencies on the implementations.
You'll likely also need some code in your composition root that wires together the actual implementations.
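To make that concrete, here is a rough C++ sketch of the split; the names (Transport, Protocol, Session) and the file-transfer flavour are invented for illustration, not taken from the book or talk:

#include <optional>
#include <string>

// Module 1: the really simple network code, hidden behind an interface so it can be substituted.
struct Transport {
    virtual ~Transport() = default;
    virtual void send(const std::string& bytes) = 0;
    virtual std::optional<std::string> receive() = 0;  // empty optional models a timeout
};

// Module 2: the complicated message logic - no sockets anywhere, so it is cheap to test exhaustively.
struct Protocol {
    std::string encodeRequest(const std::string& fileName) { return "PUT " + fileName; }
    bool isAck(const std::string& response) { return response == "OK"; }
};

// Module 3: the thin coordinator, tested against a fake Transport.
class Session {
public:
    Session(Transport& transport, Protocol& protocol) : transport_(transport), protocol_(protocol) {}
    bool sendFile(const std::string& fileName) {
        transport_.send(protocol_.encodeRequest(fileName));
        const auto reply = transport_.receive();
        return reply && protocol_.isAck(*reply);
    }
private:
    Transport& transport_;
    Protocol& protocol_;
};

In the tests for the third module you hand Session a scripted fake Transport and check the protocol of interactions; the real socket-backed Transport only gets exercised by the (fewer, slower) end-to-end tests.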
For a good take on separating the networking client from the networking logic/protocol, see Cory Benfield 2016.
It might also be useful to review:
Boundaries, by Gary Bernhardt
At the Boundaries, Applications Aren't Object Oriented by Mark Seemann
Put the client code that is working with the socket into a separate class that can be injected into the "business code". For your tests, inject a mock instead, verifying that the API of the "client socket adapter" is called in the appropriate way. Mocking libraries make this easy.
Put the server code that is working with the socket into a separate class and design an internal API for the "business code" that the "server socket adapter" is calling. Ignore the adapter in your tests and call the API of the business code directly.
You might want to read about the Ports & Adapters architecture (sometimes also called "Hexagonal Architecture").
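As a hedged illustration of the client-side advice above (the class names and the Google Mock usage here are mine, not something prescribed by the pattern):

#include <gmock/gmock.h>
#include <gtest/gtest.h>
#include <string>

// The port: the only thing the business code knows about "the network".
struct ClientSocketAdapter {
    virtual ~ClientSocketAdapter() = default;
    virtual void sendFile(const std::string& path) = 0;
};

// Business code depends on the port, never on real sockets.
class FolderSync {
public:
    explicit FolderSync(ClientSocketAdapter& socket) : socket_(socket) {}
    void onFileAdded(const std::string& path) { socket_.sendFile(path); }
private:
    ClientSocketAdapter& socket_;
};

struct MockSocketAdapter : ClientSocketAdapter {
    MOCK_METHOD(void, sendFile, (const std::string& path), (override));
};

TEST(FolderSyncTest, NewFileIsSentToServer) {
    MockSocketAdapter socket;
    FolderSync sync(socket);
    EXPECT_CALL(socket, sendFile("notes.txt"));  // verify the adapter is driven correctly
    sync.onFileAdded("notes.txt");
}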
I have used TDD as a development style on some projects over the past two years, but I always get stuck on the same point: how can I test the integration of the various parts of my program?
What I am currently doing is writing a test case per class (this is my rule of thumb: a "unit" is a class, and each class has one or more test cases). I try to resolve dependencies by using mocks and stubs, and this works really well as each class can be tested independently. After some coding, all important classes are tested. I then "wire" them together using an IoC container. And here I am stuck: how do I test whether the wiring was successful and the objects interact the way I want?
An example: Think of a web application. There is a controller class which takes an array of ids, uses a repository to fetch the records based on these ids and then iterates over the records and writes them as a string to an outfile.
To make it simple, there would be three classes: Controller, Repository, OutfileWriter. Each of them is tested in isolation.
What I would do in order to test the "real" application: make the HTTP request (either manually or automated) with some ids from the database and then check in the filesystem whether the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic? Is this what is called an "integration test"? A book about unit testing I recently read seemed to treat integration testing as more of an anti-pattern.
IMO (and I have no literature to back me on this), the key difference between our various forms of testing is scope:
Unit testing is testing isolated pieces of functionality [typically a method or stateful class]
Integration testing is testing the interaction of two or more dependent pieces [typically a service and consumer, or even a database connection, or connection to some other remote service]
System integration testing is testing of a system end to end [a special case of integration testing]
If you are familiar with unit testing, then it should come as no surprise that there is no such thing as a perfect or 'magic-bullet' test. Integration and system integration testing is very much like unit testing, in that each is a suite of tests set to verify a certain kind of behavior.
For each test, you set the scope, which then dictates the input and expected output. You then execute the test and compare the actual result to the expected one.
In practice, you may have a good idea how the system works, and so writing typical positive and negative path tests will come naturally. However, for any application of sufficient complexity, it is unreasonable to expect total coverage of every possible scenario.
Unfortunately, this means unexpected scenarios will crop up in Quality Assurance [QA], PreProduction [PP], and Production [Prod] cycles. At which point, your attempts to replicate these scenarios in dev should make their way into your integration and system integration suites as automated tests.
Hope this helps, :)
ps: pet-peeve #1: managers or devs calling integration and system integration tests "unit tests" simply because nUnit or MsTest was used to automate it ...
What you describe is indeed integration testing (more or less). And no, it is not an antipattern, but a necessary part of the software development lifecycle.
Any reasonably complicated program is more than the sum of its parts, so however well you unit test it, you still don't have much of a clue whether the whole system is going to work as expected.
There are several aspects of why it is so:
unit tests are performed in an isolated environment, so they can't say anything about how the parts of the program are working together in real life
the "unit tester hat" easily limits one's view, so there are whole classes of factors which the developers simply don't recognize as something that needs to be tested*
even if they do, there are things which can't be reasonably tested in unit tests - e.g. how do you test whether your app server survives under high load, or if the DB connection goes down in the middle of a request?
* One example I just read in Luke Hohmann's book Beyond Software Architecture: in an app which applied a strong antipiracy defense by creating and maintaining a "snapshot" of the IDs of the HW components in the actual machine, the developers had the code very well covered with unit tests. Then QA managed to crash the app in 10 minutes by trying it out on a machine without a network card. As it turned out, since the developers were working on Macs, they took it for granted that the machine would have a network card whose MAC address could be incorporated into the snapshot...
What I would do in order to test the "real" application: make the HTTP request (either manually or automated) with some ids from the database and then check in the filesystem whether the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic?
Maybe you are duplicating code, but you are not duplicating effort. Unit tests and integration tests serve two different purposes, and usually both are desired in the SDLC. If possible, factor code used by both unit and integration tests into a common library. I would also try to have separate projects for your unit and integration tests, because your unit tests should be run separately (fast and with no dependencies), while your integration tests will be more brittle and break more often, so you will probably have a different policy for running and maintaining them.
Is this what is called an "integration test"?
Yes indeed it is.
In an integration test, just as in a unit test, you need to validate what happened in the test. In your example you specified an OutfileWriter, so you would need some mechanism to verify that the file and its data are good. You really want to automate this, so you might want something like:
#include <fstream>
#include <string>
#include <vector>

class OutFileValidator {
public:
    // Open the file, read it back, and compare it against the expected data.
    bool isCorrect(const std::string& fileName, const std::vector<std::string>& expectedLines) {
        std::ifstream in(fileName);
        std::vector<std::string> actualLines;
        for (std::string line; std::getline(in, line); ) actualLines.push_back(line);
        return in.is_open() && actualLines == expectedLines;
    }
};
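The integration test would then call something like OutFileValidator().isCorrect(outFilePath, expectedLines) after making the HTTP request and assert on the result with whatever assertion macro your framework provides (the names here are purely illustrative).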
You might review "Taming the Beast", a presentation by Markus Clermont and John Thomas about automated testing of AJAX applications.
YouTube Video
Very rough summary of a relevant piece: you want to use the smallest testing technique you can for any specific verification. Spelling the same idea another way, you are trying to minimize the time required to run all of the tests, without sacrificing any information.
The larger tests, therefore, are mostly about making sure that the plumbing is right: is tab A actually in slot A rather than slot B; do both components agree that length is measured in meters rather than feet; and so on.
There's going to be duplication in which code paths are executed, and possibly you will reuse some of the setup and verification code, but I wouldn't normally expect your integration tests to include the same level of combinatoric explosion that would happen at a unit level.
Driving your TDD with BDD would cover most of this for you. You can use Cucumber/SpecFlow with WatiR/WatiN. Each feature has one or more scenarios; you work on one scenario (behaviour) at a time, and when it passes you move on to the next until the feature is complete.
To complete a scenario, you have to use TDD to drive out the code necessary to make each step in the current scenario pass. The scenarios are agnostic to your back-end implementation, yet they verify that your implementation works; if something isn't working in the web app for that feature, the behaviour needs to be captured in a scenario.
You can of course use integration testing, as others pointed out.
Unit and integration testing is usually performed as part of a development process, of course. I'm looking for ways to use this methodology in the configuration of an existing system, in this case the Asterisk soft PBX.
In the case of Asterisk, the configuration file is as much a programming language as anything else, complete with loops, jumps, conditionals, etc., and can get quite complex. Changes to the configuration often suffer from the same problems as changes to a complex software product: it can be hard to foresee all the effects without tests in place. It's made worse by the fact that the nature of the system is to communicate with external entities, i.e. to make phone calls.
I have a few ideas about testing the system using call files (to create specific calls between extensions) while watching the manager interface for generated events. A test could then watch for an expected result, e.g. dialling *99# should result in the Voicemail application getting called.
The flaws are obvious - it doesn't test the actual result, only what the system thinks is the result, and it probably requires some modification of the system under test. It's also really hard to write these tests robustly enough to only trigger on the expected output, especially if the system is in use (i.e. there are other calls in progress).
Is what I want, a testing system for Asterisk, impossible? If not, do you have any ideas about ways to go about this in a reasonable manner? I'm willing to put a fair amount of development time into this and release the result under a friendly license, but I'm unsure about the best way to approach it.
This is obviously an old question, so there's a good chance that when the original answers were posted here, Asterisk did not support unit/integration testing to the extent that it does today (although the Unit Test Framework API went in on 12/22/09, so that, at least, did exist).
The unit testing framework (David's e-mail from the dev list here) lets you execute unit tests directly within Asterisk. Tests are registered with the framework and can be executed / viewed through the CLI. Since this is all part of Asterisk, the tests are compiled into the executable. You do have to configure Asterisk with the --enable-dev-mode option, and mark the tests for compilation using the menuselect tool (some applications, like app_voicemail, automatically register tests - but they're the minority).
Writing unit tests is fairly straightforward - and while it (obviously) isn't as fully featured as a commercial unit test framework, it gets the job done and can be enhanced as needed.
That most likely isn't what the majority of Asterisk users are going to want to use - although Asterisk developers are highly encouraged to check it out. Both users and developers are probably more interested in integration tests, which the Asterisk Test Suite provides. At its core, the Test Suite is a Python script that executes other scripts - be they Lua, Python, etc. The Test Suite comes with a set of Python and Lua libraries that help to orchestrate and execute multiple Asterisk instances. Test writers can use third-party applications such as SIPp, Asterisk interfaces (AMI, AGI), or a combination thereof to test the hosted Asterisk instance(s).
There are close to 200 tests now in the Test Suite, with more being added on a fairly regular basis. You could obviously write your own tests that exercise your Asterisk configuration and have them managed by the Test Suite - if they're generic enough, you could submit them for inclusion in the Test Suite as well.
Note that the Test Suite can be a bit tricky to set up - Leif wrote a good blog post on setting up the Test Suite here.
Well, it depends on what you are testing. There are a lot of ways to handle this sort of thing. My preference is to use Asterisk call files bundled with dialplan code. E.g.: create a call file to dial some public number; once it is answered, hop back to the specified dialplan context and perform all of my testing logic (play sound files, listen for keypresses, etc.).
I wrote an Asterisk call file library which makes this sort of testing EXTREMELY easy. It has a lot of documentation and examples too; check it out here: http://pycall.org/. That may help you.
Good luck!
You could create a set of specific scenarios and use Asterisk's MixMonitor command to record these calls. This would enable you to establish a set of sound recordings that were normative for your system for these tests, and use an automated sound file comparison tool (Perhaps something from comparing-sound-files-if-not-completely-identical?) to examine the results. Just an idea.
Unit testing, as opposed to integration testing, means your code is supposed to be architected so the logic itself is insulated from external dependencies. You said "the configuration file is as much a programming language as anything else", but that's the thing: real languages have not just control flow but abstraction capabilities, which allow you to write the logic in a way that can be unit tested. That's why I keep logic outside of Asterisk as much as possible.
For integration testing, script linphonec to drive your application, and grep the Asterisk console to see what it's doing.
You can use Docker and fire up temporary Asterisk instances for each test.
I am trying to build a test program in C++ to automate testing for a specific application. The testing will involve sending requests, which have a 'CommandType' field and some other fields, to a server.
The CommandType can be 'NEW', 'CHANGE' or 'DELETE'.
The tests can be:
Send a bunch of random requests with no pattern
Send 100 'NEW' requests, then a huge amount of 'CHANGE' requests followed by 200 'DELETE' requests
Send 'DELETE' requests followed by 'CHANGE' requests
... and so on
How can I design my software (what kind of modules or layers) so that adding any new type of test case is easy and modular?
EDIT: To be more specific, this tool will only test one specific application, which receives requests of the type described above and handles them. The test program itself will be a client application that sends the requests to the server.
I would not create your own framework. Many already exist that follow a common pattern and can likely accommodate your needs elegantly.
The xUnit framework in all incarnations I have seen allows you to add new test cases without having to edit the code that runs the tests. For example, CppUnit provides a macro that when added to a test case will auto-register the test case with a global registry (through static initialization I assume). This allows you to add new test cases without cracking open and editing the thing that runs them.
And don't let the "unit" in xUnit and CppUnit make you think it is inappropriate. I've used the xUnit framework for all different kinds of testing.
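For example, a minimal CppUnit sketch; the send_request helper and the test itself are hypothetical, just to show the registration mechanics:

#include <cppunit/TestFixture.h>
#include <cppunit/extensions/HelperMacros.h>
#include <string>

// Hypothetical helper that sends one request to the server under test.
bool send_request(const std::string& /*commandType*/) { /* real client code goes here */ return true; }

class NewRequestTest : public CppUnit::TestFixture {
    CPPUNIT_TEST_SUITE(NewRequestTest);
    CPPUNIT_TEST(testBurstOfNewRequests);
    CPPUNIT_TEST_SUITE_END();
public:
    void testBurstOfNewRequests() {
        for (int i = 0; i < 100; ++i)
            CPPUNIT_ASSERT(send_request("NEW"));
    }
};

// Static registration: the runner discovers the suite without any central list to edit.
CPPUNIT_TEST_SUITE_REGISTRATION(NewRequestTest);

Adding a new scenario (say, DELETE requests followed by CHANGE requests) is then just another test method plus a CPPUNIT_TEST line.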
I would separate out each individual test into its own procedure or, if it requires code beyond a function or two, its own source file. Then in my main routine I'd do something like:
int main()
{
    run_test_1();
    run_test_2();
    //...
    run_test_N();
    return 0;
}
Alternatively, I'd recommend leveraging the Boost Test Library and following their conventions.
I'm assuming you're not talking about creating unit tests.
IMHO, your question is too vague to provide useful answers. Is this to test a specific application, or are you trying to make something generic enough to test as many different applications as possible? Where do these applications live? Are they client-server apps, web apps, etc.?
If it's more than one application that you want your tool to test, you'll need an architecture that defines a protocol between the testing tool and the applications, so that you can convert the instructions your tool and its consumers understand into instructions that the application being tested can understand. I've done similar things in the past, but I've only ever had to worry about maybe 5 different "applications", so it was a pretty simple matter of summing up all the unique functionality of the apps and then creating an interface that supports them all.
I wouldn't presume that NEW, CHANGE, and DELETE would be your only command types either. A lot of testing involves data cleanup, test reporting, etc., and applications all handle this in their own special ways.
Use a C++ unit testing framework; read this for details and examples.
I'm working on a very large, data-intensive legacy application. Both the code base & database are massive in scale. A great deal of the business logic is spread across all of the tiers including in stored procedures.
Does anybody have any suggestions on how to begin applying "unit" tests (technically integration tests, because they need to test across tiers for a single aspect of almost any given process) to the application in an efficient way? The current architecture does not easily support any type of injection or mocking. New code is being written to facilitate testing, but what about the legacy code? Because of the strong dependency on the data itself and the business logic in the database, I'm currently using inline SQL to find data to use for testing, but this is time-consuming. Creating views and/or stored procedures will not suffice.
What approaches have you taken (if applicable)? What worked? What didn't & why? Any suggestions would be appreciated. Thanks.
Get a copy of Working Effectively with Legacy Code by Michael Feathers. It is full of useful advice for working with large, untested codebases.
Another good book is Object Oriented Reengineering Patterns. Most of the book is not specific to object-oriented software. The full text is available for free download in PDF format.
From my own experience: try to...
Automate the build and deployment
Get the database schema into version control, if it isn't yet. Usually databases include reference data that the transactional code needs to exist before it can work. Get this under version control too. Tools like dbdeploy can help you easily rebuild a schema and reference data from a sequence of deltas.
Install a version of the database (and any other infrastructure services) onto your development workstation. This will let you work on the database without continually having to go through DBAs. It's also faster than using a schema on a shared server in a remote datacentre. All major commercial database servers have free (as in beer) development versions that work on Windows (if you're stuck in the unenviable situation of developing on Windows and deploying on Unix).
Before starting work on an area of the code, write end-to-end tests that roughly cover the behaviour of the area you're working on. An end-to-end test should exercise the system from outside -- by controlling its user interface or interacting through network services -- so you won't need to change the code to put it into place. It will act as an (imperfect) regression test and give you more confidence to refactor the internals of the system towards a structure that is easier to unit test.
If there are manual test plans, read them and see what can be automated. Most manual test plans are almost entirely scripted and so are low-hanging fruit for automation.
Once you've got end-to-end test coverage, refactor the code into more loosely coupled units as you modify and/or extend it. Surround those units with unit tests.
Things to avoid:
Copying data from the production database into the environment you use for automated testing. This will make your tests unpredictable. Sure, run the system against a copy of production data, but use that for exploratory testing, not regression testing.
Rolling back transactions at the end of tests to isolate tests from one another. This will not test behaviour that only happens when transactions are committed, and will throw away data that is valuable for diagnosing test failures. Instead, tests should ensure the database is in a known initial state when they start.
Creating a "tiny" data set for tests to run against. This makes tests hard to understand because they cannot be read as a single unit. The "tiny" data set soon grows very large as you add tests for different scenarios. Instead, tests can insert data into the database to set up the test-fixture.
The whitepaper "Testing Legacy Application Modernization" highlights:
High level overview of how tests are created in AscentialTest
Ways to convert the legacy objects to the new platform
Components of object definition
How to ensure that the modernized version of the application produces the same results
For more details on testing legacy application modernization, check here:
http://application-management.cioreview.com/whitepaper/testing-legacy-application-modernization-wid-529.html
As mentioned before, there are some very good books out there; I highly recommend taking a look at Working Effectively with Legacy Code.
Something you could do is follow a data-driven approach: observe your application and introduce tests where you have the most "pain". A semi-deterministic approach you might find useful: https://link.medium.com/zY9Tysfne9