C++ Unit Testing based on fork()

So I'm interested in doing some unit testing of a library that interacts with a kernel module. To do this properly, I'd like to make sure that things like file handles are closed at the end of each test. The only way this really seems possible is by using fork() on each test case. Is there any pre-existing unit test framework that would automate this?
An example of what I would expect is as follows:
TEST() {
    int x = open("/dev/custom_file_handle", O_RDONLY);
    TEST_ASSERT_EQUAL(x, 3);
}
TEST() {
    int y = open("/dev/other_file_handle", O_RDONLY);
    TEST_ASSERT_EQUAL(y, 3);
}
In particular, after each test, the file handles should have been closed, which means that the next open() will likely return the same file descriptor value in each test.
I am not actually testing the value of the file descriptor. This is just a simple example. In my particular case, only one user will be allowed to have the file descriptor open at any time.
This is targeting a Linux platform, but something cross platform would be awesome.

Google Test does support forking the process in order to test it, but only in the form of "exit" and/or "death" tests. On the other hand, there is nothing to prevent you from writing every test like that.
Ideally, though, I would recommend that you approach your problem differently. For example, using the same Google Test framework, you can list the test cases and run them separately, so a simple wrapper that invokes the test binary multiple times, running a different test on each invocation, will solve your problem. Fork has its own problems, you know.
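A minimal sketch of the "write every test like that" idea, using Google Test's death-test machinery (the device path comes from the question; the suite and test names are illustrative):
#include <fcntl.h>
#include <unistd.h>

#include <gtest/gtest.h>

// The body runs in a forked child, so the descriptor opened there is
// released when the child exits, whatever the outcome of the test.
TEST(CustomDeviceTest, DescriptorIsReleasedAfterTest) {
    EXPECT_EXIT({
        int x = open("/dev/custom_file_handle", O_RDONLY);
        _exit(x == 3 ? 0 : 1);  // report the check through the exit code
    }, ::testing::ExitedWithCode(0), "");
}
For the wrapper approach, the test binary can enumerate its tests with --gtest_list_tests and run exactly one per invocation with --gtest_filter=SuiteName.TestName.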

The Check unit testing library for C by default executes each test in a separate child process.
It also supports two different kinds of fixtures: ones that are executed before/after each test, in the child process (called 'checked' fixtures), and ones that are executed before/after a test suite, in the parent process (called 'unchecked' fixtures).
You can disable the forking via the environment variable CK_FORK=no or via an API call, e.g. to simplify debugging an issue.
Currently, libcheck runs under Linux, Hurd, the BSDs, OS X and different kinds of Windows (MinGW, non-MinGW, etc.).
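A minimal sketch of a Check suite with a checked fixture (the device path comes from the original question; the suite, case, and test names are illustrative):
#include <check.h>
#include <fcntl.h>
#include <unistd.h>

static int fd = -1;

// Checked fixture: runs in the forked child before/after each test,
// so the descriptor never leaks into the next test.
static void setup(void)    { fd = open("/dev/custom_file_handle", O_RDONLY); }
static void teardown(void) { if (fd >= 0) close(fd); }

START_TEST(test_open)
{
    ck_assert_int_eq(fd, 3);  // descriptor value from the question's example
}
END_TEST

int main(void)
{
    Suite *s = suite_create("device");
    TCase *tc = tcase_create("core");
    tcase_add_checked_fixture(tc, setup, teardown);
    tcase_add_test(tc, test_open);
    suite_add_tcase(s, tc);

    SRunner *sr = srunner_create(s);
    srunner_run_all(sr, CK_NORMAL);   // each test runs in its own child process
    int failed = srunner_ntests_failed(sr);
    srunner_free(sr);
    return failed == 0 ? 0 : 1;
}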

Related

Would it be possible to forward MS Unit tests from an EXE to a DLL?

I have an application where there is a mess of code that has a bunch of non-isolated components. This makes things difficult in terms of doing some unit testing. So along with some unit tests in their own separate test DLL, I'm trying to also create some tests within the application DLL. The application DLL is normally invoked from an application EXE.
For some background, this code is 20+ years old, written in native C++. I cannot execute the tests in the DLL directly as the framework is not set up, so any calls executed within the DLL will not execute correctly. I've unsuccessfully tried to do this already, but maybe I need a more fundamental understanding of the MFC framework to do this.
A colleague suggested that it might be possible to have vstest.console run the tests through the EXE, where the framework can be brought up; the test calls would then be forwarded to the DLL, and the test results returned back through the EXE to vstest.console, effectively making the EXE a proxy of sorts.
I'm thinking that this might be a longshot, but I'm at a loss as how I can run the tests in the DLL properly. Could it be done? Is there a better way?
For a legacy EXE, you may use a Generic Test (for a console app) or a Coded UI Test (for a GUI app). Technically, a Generic Test or a Coded UI Test is a system-level test. You can still get code coverage for both kinds of test.
More on Generic Test
Use a generic test to wrap a console app or test harness console that
• Runs from a command line
• Returns error code: 0 <- Pass; Nonzero <- Fail
• Positive tests only for a console app; a test harness may include negative tests within.
Visual Studio treats a generic test just like other tests, but
• Add Generic Tests into a Unit Test project type
• The command line must run the .GenericTest file instead of UnitTest1.dll
• vstest.console GenericTest1.GenericTest
NOTE: Set the “run duration” long enough for your EXE.
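Since a Generic Test only observes the wrapped process's exit code, the console harness just has to map pass/fail onto zero/nonzero. A sketch, with all names hypothetical:
#include <cstdio>

// Hypothetical check that exercises the application DLL; illustrative only.
static bool run_dll_checks() {
    // ... initialize the framework, call into the DLL, verify results ...
    return true;
}

int main() {
    if (!run_dll_checks()) {
        std::fprintf(stderr, "FAIL: run_dll_checks\n");
        return 1;   // nonzero exit -> the Generic Test is reported as failed
    }
    std::puts("PASS");
    return 0;       // zero exit -> the Generic Test passes
}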

VS2012 - Disable parallel test runs

I've got some unit tests (c++) running in the Visual Studio 2012 test framework.
From what I can tell, the tests are running in parallel. In this case the tests are stepping on each other - I do not want to run them in parallel!
For example, I have two tests in which I have added breakpoints and they are hit in the following order:
Test1 TEST_CLASS_INITIALIZE
Test2 TEST_CLASS_INITIALIZE
Test2 TEST_METHOD
Test1 TEST_METHOD
If the init for Test1 runs first then all of its test methods should run to completion before anything related to Test2 is launched!
After doing some internet searches I am sufficiently confused. Everything I am reading says Visual Studio 2012 does not run tests concurrently by default, and you have to jump through hoops to enable it. We certainly have not enabled it in our project.
Any ideas on what could be happening? Am I missing something fundamental here?
Am I missing something fundamental here?
Yes.
You should never assume that another test case will work as expected. This means it should never matter whether the tests execute sequentially or concurrently.
Of course, there are test cases that expect some fundamental part of the code to work; this might be your own code or a part of the framework/library you work with. When it comes to this, the programmer should know what data or object to expect as a result.
This is where Mock Objects come into play. Mock objects allow you to mimic a part of the code and assure that the object provides exactly what you expect, so you don't rely on other (time-consuming) services, such as HTTP requests, file streams, etc.
When a project becomes complex, the setup takes a fair number of lines and code starts duplicating. The solution to this is Setup and TearDown methods. The naming convention differs from framework to framework: Setup might be called beforeEach or TestInitialize, and TearDown can also appear as afterEach or TestCleanup. The names for NUnit, MSTest and xUnit.net can be found on the xUnit.net CodePlex page.
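In the VS C++ unit test framework already mentioned above (TEST_CLASS_INITIALIZE, TEST_METHOD), per-test fixtures look roughly like this (a sketch; the class and method names are illustrative):
#include "CppUnitTest.h"
using namespace Microsoft::VisualStudio::CppUnitTestFramework;

TEST_CLASS(ConfigTests)
{
public:
    TEST_METHOD_INITIALIZE(SetUp)      // runs before each TEST_METHOD
    {
        // acquire per-test resources here
    }

    TEST_METHOD_CLEANUP(TearDown)      // runs after each TEST_METHOD
    {
        // release them here, even if the test failed
    }

    TEST_METHOD(ReadsConfig)
    {
        Assert::IsTrue(true);          // placeholder assertion
    }
};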
A simple example application:
it should read a config file
it should verify if config file is valid
it should update user's config
The way I would go about building and testing this:
have a method to read the config and a second one to verify it
have a getter/setter for the user's settings
test whether the read method returns the desired result (an object, a string, or however you've designed it)
create the mock config you're expecting from the read method and test whether the verify method accepts it
at this point, you should create multiple mock configs covering all possible scenarios, check that verification works for each of them, and fix the code accordingly. This is also what drives up code coverage.
create a mock object of an accepted config and use the setter to update the user's config, then use the getter to check whether it was set correctly (see the sketch after this list)
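A hand-rolled mock along those lines, with every type and name purely illustrative:
#include <cassert>
#include <string>
#include <utility>

// Illustrative types for the config example; all names are hypothetical.
struct Config {
    std::string user;
    bool valid = false;
};

class IConfigReader {
public:
    virtual ~IConfigReader() = default;
    virtual Config read() = 0;
};

// Hand-rolled mock: returns a canned Config instead of touching a real file.
class MockConfigReader : public IConfigReader {
public:
    explicit MockConfigReader(Config c) : canned_(std::move(c)) {}
    Config read() override { return canned_; }
private:
    Config canned_;
};

class UserSettings {
public:
    void set(const Config& c) { if (c.valid) user_ = c.user; }
    const std::string& user() const { return user_; }
private:
    std::string user_;
};

int main() {
    MockConfigReader reader(Config{"alice", true});
    UserSettings settings;
    settings.set(reader.read());
    assert(settings.user() == "alice");  // the setter applied the mock config
}
A real project would more likely use a mocking framework and a test runner instead of main() with assert, but the principle is the same: the reader is replaced by a canned object so the setter logic can be tested in isolation.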
This is a basic principle of Test-Driven Development (TDD).
If the test suite is set up as described and all tests pass, all these parts, connected together, should work perfectly. Additional tests, such as End-to-End (E2E) tests, aren't necessarily needed; I use them only to assure that the whole application flow works and to easily catch errors (e.g. an HTTP connection error).

Should I write unit tests as console apps first?

I'm debugging a set of WCF services. Initially, I created some unit tests, but since I'm using threading I often receive "Aborted" or "Stopped" tests without any clear explanation why (this is a known bug in Visual Studio).
I found it extremely challenging to debug the services when I can't even read the log output, so I quickly wrote a custom Assert class and converted all unit tests to console applications. This way, I was able to immediately fix a huge number of simple problems that had been hard or impossible to track down before.
So I'm wondering if it is a good idea to write unit tests as (fully automated) console apps first and convert them to real tests (ones that execute when launching unit tests in VS) later.
If you want to stick to the standalone console app, you can have a one-size-fits-all approach:
Change the application type of the MsUnitTest (or NUnitTest) project to "Console application".
Add a public static void Main() that calls the unit tests you are interested in.
This EXE can run on its own, or it can run in the unit-test IDE.
I prefer a standalone console runner, as described in how-do-i-use-mstest-without-visual-studio.

How to interface with an executable in C++

I have an executable that I need to run some tests on in C++ - and the testing is going to take place on all of Windows, Linux and Mac OSes.
I was hoping for input on:
How would I interface with the previously built executable from my code? Is there some kind of command functionality that I can use? Also, since I think the commands change between OSes, I'd need some guidance in figuring out how I could structure this for all three OSes.
EDIT - Interface = I need to be able to run the executable with a command line argument from my C++ code.
The executable, when called from the command line, also outputs some text onto a console - how would I be able to grab that output stream? (I'd need to record those outputted values as part of my tests.)
Feel free to ask me follow-up questions.
Cheers!
If you use Qt to develop your code, you'll find that QProcess will allow you to spawn a command-line program in a platform-agnostic way.
Essentially:
#include <QProcess>

QObject *parent = nullptr;  // or a real parent object
QString program = "yourcommandlineprogram";
QStringList arguments;
QProcess *myProcess = new QProcess(parent);
myProcess->start(program, arguments);
You can then read from the process with various function calls such as readAllStandardOutput(), and write to the input of the process with QProcess::write().
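Putting that together, a small helper that runs a program and captures its output might look like this (runAndCapture is an illustrative name, not part of Qt):
#include <QByteArray>
#include <QProcess>
#include <QString>
#include <QStringList>

// Run a program synchronously and capture its stdout.
int runAndCapture(const QString &program, const QStringList &arguments,
                  QByteArray *output)
{
    QProcess proc;
    proc.start(program, arguments);
    if (!proc.waitForFinished())           // default timeout is 30 seconds
        return -1;
    *output = proc.readAllStandardOutput();
    return proc.exitCode();                // the child's exit code
}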
Alternatively, if you prefer Boost to Qt, Boost.Process will also let you launch processes. I confess I don't like the syntax as much...
#include <boost/process.hpp>  // the (old, pre-review) Boost.Process draft API
namespace bp = boost::process;

bp::command_line cl("yourcommandlineprogram");
cl.argument("someargument");
bp::launcher l;
l.set_stdout_behavior(bp::redirect_stream);
l.set_merge_out_err(true);
l.set_work_directory(dir);  // dir: the child's working directory
bp::child c = l.start(cl);
You can then work with your subprocess 'c' by using stream operators << and >> to read and write.
All those OSes support some form of "subprocess" calling technique, where your tester creates a new child process and executes the code under test there. You get to not only pass a command line, but also have the opportunity to attach pipes to the child process' standard input and output streams.
Unfortunately, there is no standard C++ API to create child processes. You'll have to find the appropriate API for each OS. For example, in Windows you could use the CreateProcess function: MSDN: Creating Processes (Windows).
See also Stackoverflow: How do you spawn another process in C?
As I understand it, you want to:
Spawn a new process with arguments not known until runtime.
Retrieve the information printed to stdout by the new process.
Libraries such as QProcess can spawn processes; however, I would recommend doing it by hand on both Windows and macOS/Linux, as QProcess is probably overkill for this case.
For macOS/Linux, here's what I would do (a sketch follows the list):
Set up a pipe in the parent process. The read end of the pipe becomes a new file descriptor in the parent.
fork.
In the newly created child process, set stdout (file descriptor 1) to the write end of the pipe.
execvp in the newly created child process, passing the target executable along with whatever arguments you want to give it.
From the parent process, wait for the child (optional).
From the parent process, read from the file descriptor you obtained in Step 1.
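A minimal POSIX sketch of those steps ("yourcommandlineprogram" and "someargument" are stand-ins for the real executable and arguments):
#include <cstdio>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main()
{
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                      // child
        close(fds[0]);                   // close the unused read end
        dup2(fds[1], STDOUT_FILENO);     // stdout now feeds the pipe
        close(fds[1]);
        execlp("yourcommandlineprogram", "yourcommandlineprogram",
               "someargument", (char *)nullptr);
        _exit(127);                      // only reached if exec failed
    }

    close(fds[1]);                       // parent: close the unused write end
    char buf[4096];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);   // the child's captured output
    close(fds[0]);
    waitpid(pid, nullptr, 0);            // reap the child (the optional wait)
    return 0;
}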
First of all, is it possible that you simply want to make your original code reusable? In that case you can build it as a library and link it into your new application.
If you really want to communicate with another executable, then you need to start it as a subprocess of the main application. I would recommend the Process class of the Poco C++ Libraries.
Looks like a job for popen(), available on Linux and OS X, and on Windows as _popen().
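A minimal sketch (the command string is a stand-in for the real program):
#include <cstdio>

int main()
{
    // Use _popen/_pclose on Windows; popen/pclose elsewhere.
    FILE *p = popen("yourcommandlineprogram someargument", "r");
    if (!p) return 1;

    char line[256];
    while (fgets(line, sizeof line, p))
        fputs(line, stdout);             // record the program's output here
    return pclose(p) == 0 ? 0 : 1;       // nonzero wait status means failure
}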
Sounds like you are only planning to do functional testing at the executable level. That is not enough. If you plan to do thorough testing, you should also write unit tests, and for that there are some excellent frameworks. My preferred one (by far) for C++ is Boost.Test.
If you control the source code, there is also a common trick for functional testing besides launching the exe from an external process: embed the functional tests. You just add an option to your program that executes the tests. This is nice because the tests are embedded in the code, and a self-check can easily be launched in any execution environment.
That means that in the test environment, when you call your program with some test-dedicated arguments, nothing keeps you from going the full way: redirect the contents of stdout and even check the test results from within the program. This makes the whole testing much easier than calling the program from an external launcher and then analysing the results from that launcher.
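A sketch of that embedded-tests option ("--self-test" and run_self_tests are illustrative names, not a convention of any framework):
#include <cstring>

static int run_self_tests()
{
    // ... run the embedded functional tests, return 0 on success ...
    return 0;
}

int main(int argc, char **argv)
{
    if (argc > 1 && std::strcmp(argv[1], "--self-test") == 0)
        return run_self_tests();         // test mode: exit code is the verdict
    // ... normal application flow ...
    return 0;
}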

Can I insert a test into a Qt event window?

I am attempting to add some tests to an existing Qt GUI application using QTest. The GUI uses quite a bit of complicated start-up code, so I'd rather not write another main() to start it again.
To me, it seems like the easiest way would be to instantiate the app and then run the tests on it. I am just not sure, however, what function I could plug my test object into so that I don't block the flow of messages.
I could send a special message to start the test, or set a timer, but that's complicated, and tests are supposed to simplify things.
So where would be the best place in the existing GUI to insert and qExec a QTest object?
I'm willing to be proven wrong, but test frameworks in general (and QTest specifically, from what I've used of it) tend to assume that the test framework will be driving the code to be tested, as opposed to running alongside it.
I'm also concerned about "The GUI uses quite a bit of complicated start-up code...." Are you intending on testing the startup code? Or testing other stuff around it?
Generally speaking, when I start looking at adding tests to an application, I try to find smaller pieces that are used in a lot of the application, and write tests for those. I then build up to testing the bigger pieces that integrate those smaller ones. My general idea is that if the small pieces work properly, then either I've put them together correctly and things should work, or I haven't and things should obviously fail when I try to run the application.
I should mention that there are other options for testing GUIs for Qt applications in particular. They tend to be more like scripts run on your program, with the output recorded. If that interests you, then you could look into Squish for Qt.