Architecture/design advice for a test program - C++

I am trying to build a test program in C++ to automate testing for a specific application. The testing will involve sending requests to a server; each request has a 'CommandType' field and some other fields.
The CommandType can be 'NEW', 'CHANGE' or 'DELETE'.
The tests can be:
Send a bunch of random requests with no particular pattern
Send 100 'NEW' requests, then a huge number of 'CHANGE' requests, followed by 200 'DELETE' requests
Send 'DELETE' requests followed by 'CHANGE' requests
... and so on
How can I design my software (what kind of modules or layers) so that adding any new type of test case is easy and modular?
EDIT: To be more specific, this will test only one specific application, which receives requests of the type described above and handles them. The test program will be a client application that sends the requests to the server.

I would not create your own framework. There are many already written that follow a common pattern and can likely accommodate your needs elegantly.
The xUnit framework in all incarnations I have seen allows you to add new test cases without having to edit the code that runs the tests. For example, CppUnit provides a macro that when added to a test case will auto-register the test case with a global registry (through static initialization I assume). This allows you to add new test cases without cracking open and editing the thing that runs them.
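For illustration, that registration looks roughly like this in CppUnit (the fixture name and test body are invented; only the macros come from CppUnit):
#include <cppunit/extensions/HelperMacros.h>

class NewRequestBurstTest : public CppUnit::TestFixture {
    CPPUNIT_TEST_SUITE(NewRequestBurstTest);
    CPPUNIT_TEST(testSendHundredNewRequests);
    CPPUNIT_TEST_SUITE_END();
public:
    void testSendHundredNewRequests() {
        // Build and send 100 'NEW' requests here, then assert on the responses.
        CPPUNIT_ASSERT(true); // placeholder assertion
    }
};

// Registers the suite with the global registry during static initialization,
// so the runner picks it up without any central list being edited.
CPPUNIT_TEST_SUITE_REGISTRATION(NewRequestBurstTest);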
And don't let the "unit" in xUnit and CppUnit make you think it is inappropriate. I've used the xUnit framework for all different kinds of testing.

I would separate out each individual test into its own procedure or, if it requires code beyond a function or two, its own source file. Then in my main routine I'd do something like:
int main()
{
    run_test_1();
    run_test_2();
    //...
    run_test_N();
    return 0;
}
Alternatively, I'd recommend leveraging the Boost Test Library and following their conventions.
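As a sketch of what the Boost Test Library route looks like (the test names here are invented), auto-registered test cases mean you never maintain that list of run_test_N calls yourself; the framework supplies main() and runs everything it finds:
#define BOOST_TEST_MODULE RequestTests
#include <boost/test/included/unit_test.hpp>

// Each BOOST_AUTO_TEST_CASE registers itself with the framework.
BOOST_AUTO_TEST_CASE(send_random_requests)
{
    // Send a bunch of random requests, then check the results.
    BOOST_CHECK(true); // placeholder assertion
}

BOOST_AUTO_TEST_CASE(new_then_change_then_delete)
{
    // Send 100 'NEW', many 'CHANGE', then 200 'DELETE' requests, then check.
    BOOST_CHECK(true); // placeholder assertion
}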

I'm assuming you're not talking about creating unit tests.
IMHO, your question is too vague to provide useful answers. Is this to test a specific application, or are you trying to make something generic enough to test as many different applications as possible? Where do these applications live? Are they client-server apps, web apps, etc.?
If you want your tool to test more than one application, you'll need an architecture that defines a protocol between the testing tool and the applications, so that you can convert instructions your tool and its consumers understand into instructions the application being tested understands. I've done similar things in the past, but I've only ever had to worry about maybe five different "applications", so it was a fairly simple matter of summing up all the unique functionality of the apps and then creating an interface that supports them all.
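Purely as an illustration of that kind of interface (all names here are invented, not from the question):
#include <map>
#include <string>

// The testing tool talks only to this interface; each application under test
// gets its own adapter that translates these calls into whatever that
// application actually understands.
struct ApplicationAdapter {
    virtual ~ApplicationAdapter() = default;
    virtual void sendRequest(const std::string& commandType,
                             const std::map<std::string, std::string>& fields) = 0;
    virtual std::string lastResponse() const = 0;
};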
I wouldn't presume that NEW, CHANGE, and DELETE will be your only command types either. A lot of testing involves data cleanup, test reporting, etc., and applications all handle these in their own special ways.

Use a C++ unit testing framework. Read this for details and examples.

Related

TDD on a client-server application

I'm having trouble grasping how to do TDD when building a client-server system.
The simple katas (FizzBuzz, etc.) are easy to understand, but when my client needs to send the server a file over TCP sockets and get a response back, I get confused about how to test that.
I had a project building a file-sync system. The client monitors a folder and, every time a change happens (new file, file deletion, etc.), the server should update automatically.
The client can have many devices; for example, I can have a copy of the folder on two different computers and they should all sync perfectly.
I started the project with tests, but once I reached the part that talks to the server I got stuck and didn't understand how to implement the tests.
Most of the things I find regarding TDD cover only the simple stuff. I would love your advice on this slightly more complex application.
I'm having trouble grasping how to do TDD when building a client-server system.
The reference you want is Growing Object Oriented Software, Guided by Tests
I started the project with tests, but once I reached the part that talks to the server I got stuck and didn't understand how to implement the tests.
Basic idea: you are trying to work towards a design where you can separate the complicated code from the code that is hard/expensive to test.
This often means three "modules":
A really simple module that knows how to talk to the network
A complicated module that knows how to prepare messages for the network, and how to interpret the responses (and timeouts)
A module that can coordinate the interaction of the two modules above.
The first module, you "test" using code review, acceptance testing, and taking advantage of the fact that it doesn't change very often (because it is so simple).
The second module, you use lots of programmer tests to make sure that the logic correctly handles all of the different messages that can pass through it.
The third module, you concentrate on testing the protocol. Here, we'll often use a substitute implementation (aka a mock or some other flavor of test double) for one or both of the first two modules.
In a language with types like Java or C#, the need for substitutes will often mean that the first two modules will need to implement some interface, and the third module will depend on those interfaces rather than having direct dependencies on the implementations.
You'll likely also need some code in your composition root that wires together the actual implementations.
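A rough C++ sketch of that three-module shape, with every name invented for illustration:
#include <cstddef>
#include <string>
#include <vector>

// Module 1: the thin network layer, behind an interface so nothing else
// touches a real socket.
struct Transport {
    virtual ~Transport() = default;
    virtual void send(const std::string& bytes) = 0;
    virtual std::string receive() = 0;
};

// Module 2: the complicated part, building messages and interpreting replies.
// Pure logic, covered by ordinary unit tests.
struct Protocol {
    std::string makeUploadRequest(const std::string& fileName, std::size_t size) {
        return "PUT " + fileName + " " + std::to_string(size);
    }
    bool isAck(const std::string& reply) { return reply == "OK"; }
};

// Module 3: the coordinator, tested against a substitute Transport.
class SyncClient {
public:
    SyncClient(Transport& transport, Protocol& protocol)
        : transport_(transport), protocol_(protocol) {}

    bool upload(const std::string& fileName, const std::vector<char>& contents) {
        transport_.send(protocol_.makeUploadRequest(fileName, contents.size()));
        return protocol_.isAck(transport_.receive());
    }

private:
    Transport& transport_;
    Protocol& protocol_;
};

// In a test, a fake Transport records what was sent and returns a canned reply;
// the real socket-backed Transport appears only in the composition root.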
For a good take on separating the networking client from the networking logic/protocol, see Cory Benfield 2016.
It might also be useful to review:
Boundaries, by Gary Bernhardt
At the Boundaries, Applications Aren't Object Oriented by Mark Seemann
Put the client code that works with the socket into a separate class that can be injected into the "business code". For your tests, inject a mock instead and verify that the API of the "client socket adapter" is called in the appropriate way. Mocking libraries make this easy.
Put the server code that works with the socket into a separate class and design an internal API for the "business code" that the "server socket adapter" calls. Ignore the adapter in your tests and call the API of the business code directly.
You might want to read about the Ports & Adapter architecture (sometimes also called the "Hexagonal Model").

Can autotests reuse code of the application?

Unit tests are strongly connected to the existing code of the application, written by developers. But what about automated UI and API tests (integration tests)? Does anybody think it's acceptable to reuse code of the application in a separate automation solution?
The answer would be no. UI tests follow the UI: go to that page, input that value in that textbox, press that button, and I should see this text. You don't need anything code-related for this. All of it should be done against some acceptance criteria, so you should already know what to expect without looking at any code.
For the API integration tests you would call the endpoints with some payloads and then check the results. You don't need references to any code for this either. The API should be documented, and the documentation should explain very well what endpoints are available, what the payloads look like and what you can expect to get back.
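To make that concrete, a check like this needs nothing from the application's codebase. A sketch using libcurl, with a hypothetical endpoint and expectations taken from an imagined API document:
#include <curl/curl.h>
#include <cassert>
#include <string>

// Appends the response body into the std::string passed via CURLOPT_WRITEDATA.
static size_t collect(char* data, size_t size, size_t nmemb, void* out) {
    static_cast<std::string*>(out)->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    std::string body;
    curl_easy_setopt(curl, CURLOPT_URL, "https://api.example.test/items/42"); // hypothetical endpoint
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
    CURLcode rc = curl_easy_perform(curl);
    long status = 0;
    curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    // The expected status and payload come from the API documentation,
    // not from the application's source.
    assert(rc == CURLE_OK);
    assert(status == 200);
    assert(body.find("\"id\": 42") != std::string::npos);
    return 0;
}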
I am not sure why you'd think about reusing application code inside an automation project.
OK, so after clarifications: you're talking about reusing models only, not actual code. That is not a bad idea and can actually help, as long as these NuGet packages do not bring in any other dependencies.
Code reusability is a great concept, but it's very hard to get right in practice. Models typically come with annotations, which require other packages that are of course not needed in an automation project. So, if you can get NuGet packages without extra dependencies, literally data models only and nothing else, then that does work. Anything more than that will create issues, so I'd push back on it.

Explanation of the differences between testing tools in PlayFramework 2 (WithApplication, WithServer, WithBrowser, InMemory etc...)

I am new to web application development, and even more so with Play Framework. My goal is to ensure my application is well tested, following Test Driven Development principles.
Play provides in its docs several means of testing a Play application, and often I have difficulty deciding which kinds of tests I should do and which ones I can do without.
1) testing controllers vs WithApplication vs WithServer
option 1 is to test controllers as plain unit test
option 2 is to test the route using WithApplication and FakeRequest (knowing that the route calls the controller function, this approach feels more complete than option 1)
option 3 is to use WithServer with WS to make a request and await a response (this feels very similar to option 2, except it's using a real server)
Is testing with option 3 just a redundancy over testing with option 2? Can one be discarded in favor of the other?
2) in memory DB vs real DB
the in-memory DB (H2) does not seem to support some Postgres functionalities
testing against in-memory DB does not reflect a connection to a real database
Following the reasons above, I feel like testing with in-memory DB can result in uncaught bugs. Now, I understand that using a real DB is no longer called unit testing, as there are external dependencies. But is unit testing really something we want in this case?
3) WithBrowser (Selenium)
The advantages of this approach are clear, and likely irreplaceable (right?)
It seems like I am missing something when it comes to testing web applications, and clarification would be greatly appreciated.
WithApplication is for testing with a Play application. It's not strictly needed for testing routing, invoking controllers, etc.; they can all be tested without a running application (except when they can't: some things rely on global state, but this is something we are gradually fixing in Play). WithApplication is useful, I think, when you want to test all your components working together. By using WithApplication, you let Play instantiate and wire everything together for you, which may be a lot easier than setting it all up manually in your tests.
WithServer has a number of interesting use cases. For one, it provides more thorough integration testing than WithApplication: if you invoke a controller with a fake request, a lot of shortcuts are taken, whereas invoking a controller with a real request over the wire doesn't take any shortcuts. Another interesting use case is testing HTTP client code: you may want to make sure that your HTTP client actually makes HTTP requests that make sense, so you set up some mock controllers with a mock router and run them with WithServer. Finally, WithServer may be useful if you want to test an actual client to a REST API that you've written, talking to the actual service.
Whether you use an in-memory database or a real database for testing is a question of hot debate, and Play is not opinionated here; it gives you the necessary tools for doing both. Some people like to use database abstraction tools and keep their database access database-agnostic. The motivations for this can be wide and varied, and certainly one of them is that unit testing can then be done with in-memory databases.
Testing with in-memory databases offers a lot of advantages: you can instantiate a new database for every test, ensuring test isolation (the biggest problem I've seen with running tests against a real database), you can run your tests in parallel, they are usually faster, and they can run on any platform without any infrastructure setup.
Of course, testing against a different database from production does open the possibility for bugs to slip through. But then, anything short of testing every permutation of every possible input and output opens the possibility for bugs to slip through, so all testing is imperfect at best, and a balance has to be struck between test coverage and the convenience of writing and maintaining tests. So, for some, the advantages of testing against an in-memory database outweigh the disadvantages. And then of course there are people who like to take advantage of database-specific features; for them, in-memory database testing will be impossible. It's not hard to write test code against a real database in Play; I've done it a lot.

Distributing unit testing across virtual machines

I've spent the last few days looking around for an existing solution to a functional testing problem, but I am out of ideas and would appreciate some SO help!
I've got a pre-existing suite of functional networking tests, currently written in C++ using Boost.Test and Google Test, though it might be rewritten in Rust soon. These generally take the following form:
unit test fixture {
    1. Start a thread representing "the server", which listens on some localhost port for incoming network connections.
    2. Do client stuff representing "the client" against that localhost port.
    3. Join the server thread, fetching any errors or problems.
    4. Exit with success or failure.
}
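In Google Test, that pattern looks roughly like the sketch below; the two helper functions are stand-ins for the suite's real client and server code:
#include <gtest/gtest.h>
#include <future>
#include <string>

// Placeholders for the real networking code; each returns an error message,
// or an empty string on success.
static std::string run_server_once(int /*port*/) { return ""; }
static std::string run_client_scenario(int /*port*/) { return ""; }

TEST(LoopbackFunctionalTest, ClientAndServerAgree) {
    const int port = 45678; // arbitrary localhost port for the example
    // 1. Start "the server" on another thread.
    auto server = std::async(std::launch::async, run_server_once, port);
    // 2. Do the client work against that localhost port.
    const std::string client_error = run_client_scenario(port);
    // 3. Join the server, collecting any errors.
    const std::string server_error = server.get();
    // 4. Pass or fail.
    EXPECT_TRUE(client_error.empty()) << client_error;
    EXPECT_TRUE(server_error.empty()) << server_error;
}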
This approach is great, and it works well. However, it only tests loopback, and in the real world the server component is probably in its own process running behind a NAT-routed network, so it's not particularly realistic and therefore not really testing the code. What I think I'm looking for is some method of splitting the server thread part off into its own process, and then some method of getting the server test process and the client test process to work together to run the functional tests. Ideally the server and client processes would run on separate "machines"; this is something I can automate fairly easily with OpenVZ scripting, so consider that problem out of scope, though it does make forking the process non-ideal.
I had been thinking that in this age of Web 2.0 et al., surely this is a very common functional test scenario, and therefore established patterns and test frameworks would abound. As an old-timer, my first thought on how to solve this is "DCOM", though that's a 1990s, Microsoft-only solution. Maybe there is a modern and portable equivalent and I am simply not searching for the right terms, so here is my first question:
Is there any standard functional testing library or framework which extends Google Test or Boost.Test etc which lets you easily choose at runtime whether the server and client parts of each functional test should run as threads or as processes or best of all, as processes inside their own virtual machine with its own network stack?
This test scenario is surely common as muck. But let's assume it isn't and no such tool exists. In that case we need to extend Boost.Test or Google Test with some extra support. Firstly, we need to associate a supporting "server" part with each test fixture, and for the threaded test scenario we need to always run the server and client fixtures concurrently. So, my second question:
Is there any way of strongly associating two test fixtures in any of the popular C++ or Rust unit testing frameworks where the two fixtures are seen as two halves of the same test, and always executed concurrently?
This leaves the second part: how to get a unit test framework to execute only the client parts in one process and only the server parts in the other process, to run both concurrently and in sync with one another, and moreover to merge the JUnit XML output from both parts into a single test result. So:
Is there any alternative functional testing approach, methodology, or open source solution which is better suited for distributed network functional testing than unit test frameworks such as Google Test or Boost.Test? Preferably something libvirt aware so it can orchestrate virtual machines as part of the testing setup and teardown? For example, is there some Jenkins plugin or something which could use Jenkins slaves in each OpenVZ container to orchestrate the concurrent execution of the multiple parts of each of the functional tests? Or is old fashioned CORBA still the least worst solution here? Is there maybe some way of automatically wrapping up test fixtures into a REST HTTP API?
I did do a quick review of the major integration testing frameworks, namely Citrus, STAF and Twister. I'll be honest in saying they all seem way overkill for what I want, which is a quick and easy way of making the existing functional test suite use more realistic network routing than loopback. That's all I really want, and I don't care how it's done so long as the checks and requires still appear in Jenkins. Over to you, Stack Overflow!
My thanks in advance for any help.
I have had similar requirements, but I come from the Java side of the world. What you can easily do is have distributed management of nodes/machines using JGroups.
Once you understand how it works, you can build a distributed system of nodes with just 100 lines of code. With this system you can spawn and control child processes on each of those machines and check the output and everything else yourself. It should only cost you a day to take a JGroups example and get this running.
Once you have the infrastructure to copy code and execute it as independent processes, control is easy. Then use some of those nodes to get Selenium up, drive a number of browser windows and execute scripts (or use Sikuli) to do your testing. Since the Selenium process is again Java, you can generate all kinds of reports, print them to the console or send them directly to the cluster, since those processes can join the cluster using JGroups too.
Such a system can be implemented in a week and it is really under your control. Very simple to do and very extendable.
Also you can provide plugins for Jenkins, Jira or Quality Center to interact with it and trigger test execution and management.

Best practices for unit testing integration with an unpredictable third party resource that you don't have control over

I've written an application that interacts with at least one third party resource (in my case it's a website and there are many websites that need to be tested against) that I do not have control over. Part of the interaction requires logging in with real user credentials and interacting with data that is transient.
As such, I have the following problems:
My test(s) must include private data (username and password) in order to log in
The data I'm looking for will no longer be valid anywhere from 24 to 48 hours after the identifier pointing at that data has been coded into the test(s) and new data matching the test scenario will need to be chosen
Which gives rise to the following questions:
How and where should I reference this data to prevent it from accidentally ending up in source control?
Is there a way to request this input as a precondition of running the relevant tests?
How should I deal with the problem that automated build scenarios will fail when the test data becomes outdated every couple of days?
What are the best practices for writing tests that deal with this sort of scenario?
I'm using Microsoft's unit testing framework with .Net 4.
I've run into this general class of problems before when writing automated tests. Generally they have to do with depending on an unmanaged resource. This resource can be a SOAP service, a network or, in your case, a website.
My general approach is to have tests that can run in two modes.
Unmanaged Mode
I want to leverage the real resource during testing; this ensures that the code actually works against the actual resource. This is also useful when you need to extend the code for a new resource, or for a change in the structure of the resource.
Managed Mode
I want to capture the transient data and use it as a fixture in a mock of the unmanaged resource. This ensures that the code still works against a particular real-world example (although a static one), and it gives me the fine-grained control that comes from using a managed (mock) resource. I can also run this test if the resource in question becomes unavailable for some time, or is simply unavailable in general (i.e. when the test is being run from the build server).
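A minimal C++ sketch of that two-mode switch, assuming credentials arrive via environment variables (so they never live in source control) and the captured data sits in a fixture file; every name here is invented:
#include <cassert>
#include <cstdlib>
#include <fstream>
#include <sstream>
#include <string>

// Placeholder for the unmanaged path; the real implementation would log in to
// the actual website with the supplied credentials.
static std::string fetch_from_real_site(const std::string& /*user*/,
                                        const std::string& /*password*/) {
    return "<html>Welcome, test user</html>"; // stand-in for the live response
}

// Managed mode: replay a response captured earlier from the real resource.
static std::string fetch_from_fixture(const std::string& path) {
    std::ifstream in(path);
    std::ostringstream out;
    out << in.rdbuf();
    return out.str();
}

int main() {
    const char* user = std::getenv("TEST_SITE_USER");         // supplied outside the repo
    const char* password = std::getenv("TEST_SITE_PASSWORD"); // supplied outside the repo
    std::string page;
    if (user && password) {
        // Unmanaged mode: exercise the real, unpredictable resource.
        page = fetch_from_real_site(user, password);
    } else {
        // Managed mode: static but real-world data; runs anywhere, e.g. on the build server.
        page = fetch_from_fixture("fixtures/logged_in_page.html");
    }
    // The same assertion covers both modes.
    assert(page.find("Welcome") != std::string::npos);
    return 0;
}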
It's not an exact solution to your problem, but have you considered testing against a mock website? That is, write your own "website" that exists as part of the test framework and behaves in a predictable and consistent manner. All it needs to do is respond to a minimal set of requests.
With this approach, you still demonstrate that your code works as expected (and doesn't regress, etc.), you can clearly identify the source of any regression as being from your code vs. the unpredictable 3rd-party stuff, and you eliminate the privacy concerns.