Qt test how to stop execution when a signal is emitted - c++

I am currently testing a Qt application. I have to build a test that checks the correct input and output of CSV files.
Problem:
The data is read asynchronously, so my test program ends before the data is loaded, and this is the output I get:
QFATAL: Received signal 11
FAIL! : Received a fatal error
Program flow:
There is a class AsyncLoader that loads the data. After the data read is finished, it emits a completed() signal.
So, I modified the test program to include a QEventLoop. The code is shown below:
#pragma once
#include <QEventLoop>
#include <QSignalSpy>
#include "asyncloader.h"
#include "alphaevent.h"
#include "mainwindow.h"
#include <QtTest/QtTest>
class Test1: public QObject
{
Q_OBJECT
private slots:
void initTestCase();
void mainWindowTester();
void cleanupTestCase();
};
void Test1::initTestCase()
{
qDebug()<<"hello";
}
void Test1::mainWindowTester()
{
AlphaEvent *fs1 = new AlphaEvent(this);
fs1->setOperation(AlphaEvent::FileOpen);
fs1->setPath(QString("/home/user/PC5.csv"));
MainWindow *mw1 = new MainWindow();
QEventLoop loop;
loop.connect(mw1, SIGNAL(completed(FileEvent*)), SLOT(quit()));
mw1->dataSetIORequest(fs1);
loop.exec();
int pqr = mw1->_package->dataSet->rowCount();
int pqr1 = mw1->_package->dataSet->columnCount();
qDebug() << "pqr== "<< pqr;
qDebug() << "-----------------------------------------------";
QVERIFY(pqr==5);
}
void Test1::cleanupTestCase()
{
}
QTEST_MAIN(Test1)
#include "test1.moc"
But with this, I get a "subprocess error: FailedToStart".
Is there a way to test an asynchronous unit?
I am using Qt version 5.4.2, QMake version 3.0
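(Note: Qt 5, including 5.4.2, also provides QSignalSpy::wait(), which runs a local event loop until the signal is emitted or a timeout in milliseconds expires; that way a missing completed() fails the test instead of hanging it. A minimal sketch reusing the objects from the code above:)
MainWindow *mw1 = new MainWindow();
QSignalSpy spy(mw1, SIGNAL(completed(FileEvent*)));
mw1->dataSetIORequest(fs1);
// wait() returns false if the signal did not arrive within the timeout
QVERIFY2(spy.wait(30000), "timed out waiting for completed()");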

I will try to answer your question 'Is there a way to test an asynchronous unit?' rather than giving hints about how to do it in one framework or another.
The point is that in unit testing you are typically aiming at tests that produce deterministic results, independent of whether you run them on the development system or the target system. That is, you try to eliminate the influence of task switching on your tests. (Certainly, you also want to have the other kind of tests, but then you are in the realm of integration testing, and in the realm of nondeterministic test results.)
To separate your code from the scheduler in unit-testing, you will likely use some of the following approaches:
Separate the logic from the synchronization. For example, if you have a synchronization point in the middle of a function, you could extract the code before and after the synchronization point into separate functions and test these functions separately.
Double the synchronization functions. For example, you could create stubs or mocks for the mutex_lock function. Whenever your double is called, you can then make it simulate the changes that a parallel thread might have made in the meantime.
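For example, a minimal sketch of the second approach, with all names hypothetical: the code under test receives the lock as a parameter, so a test can substitute a double that simulates a concurrent modification:
// Hypothetical shared state and code under test.
struct Account { int balance = 0; };

template <typename Mutex>
void deposit(Account &account, Mutex &mutex, int amount)
{
    mutex.lock();              // synchronization point
    account.balance += amount; // logic under test
    mutex.unlock();
}

// Test double: "acquiring" the lock simulates what a parallel thread
// did while we were supposedly waiting for it.
struct RacingMutex {
    Account *account;
    void lock()   { account->balance += 100; } // simulated concurrent deposit
    void unlock() {}
};

// In a test: after deposit(account, racingMutex, 5), expect balance == 105.
// The "another thread ran first" interleaving is now exercised deterministically.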
Many good aspects and links can be found here: How should I unit test threaded code?

Related

How to unit test a void function in C++

I am working on a hobby project, mainly to learn C++ unit testing and database programming. However, I am a little lost and confused about how I should write my code for proper testing. I tend to write a lot of void functions in my C++ projects, but now I cannot figure out how to test those functions. I have succeeded in testing non-void functions, because they return something that can easily be tested against a value.
Am I doing things in an unprofessional way? Should I avoid void functions as much as possible so that I can test those functions? Or am I missing something? For example, how would I be able to test this function:
database.cpp
#include "database.hpp"
#include <sqlite3.h>
#include <iostream>
#include <string>
#include "spdlog/sinks/basic_file_sink.h"
// Creating the logging object
auto logger = spdlog::basic_logger_mt("appnotex", "../data/appnotexlog");
void Database::createDb(const char *dbname) {
// Creating the database file
sqlite3 *datadb;
int status = sqlite3_open(dbname, &datadb);
// checking for errors
if (status == SQLITE_OK) {
logger->info("------------ New Session ----------");
logger->info("Connected to Database Successfully");
} else {
std::string errorMessage = sqlite3_errmsg(datadb);
logger->info("Error: " + errorMessage);
}
}
If needed:
I am using the Google Test framework.
My whole project code is hosted here.
Update
I have tried the following. Is this approach to testing the above method correct?
databaseTest.cpp
TEST(DatabaseTest, createDbTest) {
const char *dbfilename = "../data/test/data.db";
Database *db = new Database();
db->createDb(dbfilename); // exercise the function under test first
std::ifstream dbfile(dbfilename);
bool ok = dbfile.is_open(); // the database file should exist afterwards
EXPECT_TRUE(ok);
delete db;
}
The problem is not so much that the function returns void. Think about how it signals errors and make sure all cases (success and failure) are tested, simple as that.
However, I don't see any error signalling at all there, apart from logging. As a rule of thumb, logging should only be used for post-mortem research and the like; if logging completely fails, your program should still run correctly. That means nothing internally depends on it, and it is not a suitable error handling/signalling mechanism.
Now, there are basically three ways to signal errors:
Return values. Typically used in C code and sometimes used in C++ as well. With void return, that's not an option, and that is probably the source of your question.
Exceptions. You could throw std::runtime_error("DB connect failed"); and delegate handling it to the calling code.
Side effects. You could store the connection state in your Database instance. For completeness, using a global errno is also possible, but not advisable.
In any case, all three ways can be exercised and verified in unit tests.
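For example, a sketch of the exception-based variant, tested with Google Test (which the question already uses); the bad path is just an illustrative value, and ":memory:" is SQLite's in-memory database:
// database.cpp (sketch)
#include <stdexcept>
#include <string>
#include <sqlite3.h>

void Database::createDb(const char *dbname) {
    sqlite3 *datadb = nullptr;
    if (sqlite3_open(dbname, &datadb) != SQLITE_OK) {
        std::string msg = sqlite3_errmsg(datadb); // errmsg is valid even after a failed open
        sqlite3_close(datadb);
        throw std::runtime_error("DB connect failed: " + msg);
    }
    sqlite3_close(datadb); // sketch only; real code would keep the handle in the instance
}

// databaseTest.cpp (sketch)
TEST(DatabaseTest, createDbSignalsErrors) {
    Database db;
    EXPECT_THROW(db.createDb("/no/such/dir/data.db"), std::runtime_error); // failure case
    EXPECT_NO_THROW(db.createDb(":memory:")); // success case: in-memory DB always opens
}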

BackgroundTask UWP C++ trigger only one time?

In a Windows Runtime component project (BackgroundTask, C++):
#include "pch.h"
#include "BackgroundTask.h"
using namespace Platform;
namespace SyncBackground {
void BackgroundTask::Run(IBackgroundTaskInstance^ taskInstance) {
_taskInstance = taskInstance;
taskInstance->Canceled += ref new BackgroundTaskCanceledEventHandler(this, &BackgroundTask::OnCanceled);
_deferral = taskInstance->GetDeferral();
OutputDebugString(L"Debug: CPP\r\n");
}
void BackgroundTask::OnCanceled(IBackgroundTaskInstance^ sender, BackgroundTaskCancellationReason reason) {
_deferral->Complete();
}
}
I trigger it with an ApplicationTrigger from a C# project, but OutputDebugString writes only once, on the first trigger. With the same BackgroundTask written in C#, Debug.WriteLine() writes on every trigger.
Why does the C++ version run only once? And how can I make it work like the C# one (I need to send some data and commands via the trigger)?
Thanks.
I need it to keep running in the background.
If I understand correctly, you just want to run background tasks indefinitely. If so, even if you don't call TaskDeferral.Complete(), the task won't keep running in the background; after a period of time it will still be terminated. In that case, you can refer to this document on how to configure it. But it mentions that if you use it, you can't put the app into the Microsoft Store. If I've misunderstood, please point it out.
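If the goal is instead one run per trigger, one likely cause is the deferral taken in Run() and never completed: an instance that never finishes is still considered running, and an ApplicationTrigger request against a running task does not start a new one. A sketch of Run() that completes the deferral once synchronous work is done (assuming no work needs to outlive Run):
void BackgroundTask::Run(IBackgroundTaskInstance^ taskInstance)
{
    auto deferral = taskInstance->GetDeferral();
    OutputDebugString(L"Debug: CPP\r\n");
    // ... actual work here ...
    deferral->Complete(); // lets this instance finish so the next trigger can start a new run
}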

Qt testing when dependent on Network

I'm working on a Qt project, and I need to be able to write unit tests for functions that depend on QNetworkAccessManager.
Google Mock seems like overkill for my purposes, and I found this answer, which suggests using a "linker trick" to mock the class. However, I'm very new to C++ (and C in general), and I'm having a somewhat hard time understanding exactly how I'm supposed to use this "trick". Am I supposed to manually change the header file to run the test, or is there some nicer way to do it (I'm assuming there is)?
Any example of the header/code structure to do this correctly would be an immense help.
You could use linker tricks, but as QNetworkAccessManager can be subclassed, you might find it easier just to do that.
For example, if you want to make a version that doesn't actually connect, you could do something like:
class FailQNetworkAccessManager : public QNetworkAccessManager
{
Q_OBJECT
public:
FailQNetworkAccessManager(QObject *parent = Q_NULLPTR):QNetworkAccessManager(parent){}
protected:
QNetworkReply* createRequest(Operation op, const QNetworkRequest &originalReq, QIODevice *outgoingData = Q_NULLPTR)
{
QNetworkReply* rep = QNetworkAccessManager::createRequest(op, originalReq, outgoingData);
// Queue the abort to occur from main loop
QMetaObject::invokeMethod(rep, "abort", Qt::QueuedConnection);
return rep;
}
};
Then your test code can provide your class with the FailQNetworkAccessManager rather than the real one, and all requests should abort as soon as they're created. (This is just example code, I haven't actually tried this code yet - I would also recommend splitting this into header & cpp files).
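For example, if the class under test takes the manager as a constructor dependency (Downloader and its fetch() method are hypothetical names, just to illustrate the injection), the test could look roughly like this:
// Hypothetical class under test; the point is that it accepts any QNetworkAccessManager.
FailQNetworkAccessManager *nam = new FailQNetworkAccessManager();
Downloader downloader(nam);
QNetworkReply *reply = downloader.fetch(QUrl("http://example.com/data"));
QSignalSpy spy(reply, SIGNAL(finished()));
QVERIFY(spy.wait()); // the abort was queued, so finished() fires from the event loop
QCOMPARE(reply->error(), QNetworkReply::OperationCanceledError); // aborted request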
You should also have a look at the Qt Test system, which is the built in test framework.

Monitor/Output emitted Qt-Signals

I defined some signals which are emitted on different occasions:
signals:
void buttonXClicked(int x);
void numButtonsChanged(int num);
Now I would just like to see what these signals look like and whether the parameters are correct. It seems there are several approaches to monitoring the signals.
In the post here, rohanpm refers to the parameter -vs, which is described in more detail here:
http://qt-project.org/doc/qt-4.8/qtestlib-manual.html#qtestlib-command-line-arguments
This seems to be an elegant and quick way of getting the information I require.
But to be honest, I'm unable to understand how and where I have to run -vs. It's not part of qmake. Where else do I have to put it? (I'm pretty new to Qt.)
As for QSignalSpy, it seems to be necessary to change the existing classes? Isn't there an "external" approach as well?
There is plenty of documentation around how to test a slot - but related to signals? Could I use a printf or cout somewhere?
I got this idea while reading more about the moc and its functionality. (At least when using NetBeans,) in addition to my file ButtonTest.cpp I get the file moc_ButtonTest.cpp. Inside is a method that looks like this:
// SIGNAL 0
void ButtonTest::buttonXClicked(int _t1)
{
void *_a[] = { 0, const_cast<void*>(reinterpret_cast<const void*>(&_t1)) };
QMetaObject::activate(this, &staticMetaObject, 0, _a);
}
I could hardly believe it was so easy but I've just added a
std::cout <<"buttonXClicked: "<<_t1;
and it seems to do exactly what I want.
These are command-line arguments of the compiled test binary, not of qmake. The linked documentation describes its own example like this:
Runs the toUpper test function with all available test data, and the toInt test function with the testdata called zero (if the specified test data doesn't exist, the associated test will fail).
Analogously, for the signal output you would run:
/myTestDirectory$ testMyWidget -vs -eventdelay 500
where testMyWidget is the test binary built. Here is the -vs documentation:
-vs
outputs every signal that gets emitted
There is also some more documentation if you grep the help output:
/myTestDirectory$ testMyWidget --help | grep "\-vs"
-vs outputs every signal that gets emitted
If you happen to have trouble writing QTestLib-based unit tests, this is a good starting point for Qt 4:
QTestLib Manual
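Regarding the QSignalSpy part of the question: a spy attaches from the outside, so the class under test does not need to change. A minimal sketch, assuming a ButtonTest instance with the buttonXClicked(int) signal shown above:
ButtonTest button;
QSignalSpy spy(&button, SIGNAL(buttonXClicked(int)));
// ... do whatever makes the object emit the signal ...
qDebug() << "emission count:" << spy.count();
if (!spy.isEmpty()) {
    QList<QVariant> arguments = spy.takeFirst(); // arguments of the first emission
    qDebug() << "buttonXClicked x =" << arguments.at(0).toInt();
}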

What unit-testing framework should I use for Qt? [closed]

I am just starting up a new project that needs some cross-platform GUI, and we have chosen Qt as the GUI-framework.
We need a unit-testing framework, too. Until about a year ago we used an in-house developed unit-testing framework for C++-projects, but we are now transitioning to using Google Test for new projects.
Does anyone have any experience with using Google Test for Qt-applications? Is QtTest/QTestLib a better alternative?
I am still not sure how much we want to use Qt in the non-GUI parts of the project - we would probably prefer to just use STL/Boost in the core-code with a small interface to the Qt-based GUI.
EDIT: It looks like many are leaning towards QtTest. Is there anybody who has experience with integrating it with a continuous integration server? Also, it would seem to me that having to handle a separate application for each new test case would cause a lot of friction. Is there any good way to solve that? Does Qt Creator have a good way of handling such test cases, or would you need to have a project per test case?
You don't have to create separate test applications. Just use qExec in an independent main() function similar to this one:
int main(int argc, char *argv[])
{
int status = 0;
TestClass1 test1;
status |= QTest::qExec(&test1, argc, argv);
TestClass2 test2;
status |= QTest::qExec(&test2, argc, argv);
// ...
return status; // non-zero if any test class reported a failure
}
This will execute all test methods in each class in one batch.
Your testclass .h files would look as follows:
class TestClass1 : public QObject
{
Q_OBJECT
private slots:
void testMethod1();
// ...
};
Unfortunately this setup isn't really described well in the Qt documentation even though it would seem to be quite useful for a lot of people.
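For completeness, the matching .cpp might look like this (the test body is a hypothetical example):
#include "testclass1.h"
#include <QtTest>

void TestClass1::testMethod1()
{
    QString s("hello");
    QVERIFY(!s.isEmpty());                    // plain boolean check
    QCOMPARE(s.toUpper(), QString("HELLO")); // equality check with readable failure output
}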
I started off using QtTest for my app and very, very quickly started running into limitations with it. The two main problems were:
1) My tests run very fast - sufficiently fast that the overhead of loading an executable, setting up a Q(Core)Application (if needed), etc. often dwarfs the running time of the tests themselves! Linking each executable takes a lot of time, too.
The overhead just kept increasing as more and more classes were added, and it soon became a problem - one of the goals of unit tests is to have a safety net that runs so fast that it is no burden at all, and this was rapidly becoming not the case. The solution is to glob multiple test suites into one executable, and while (as shown above) this is mostly doable, it is not supported and has important limitations.
2) No fixture support - a deal-breaker for me.
So after a while, I switched to Google Test - it is a far more featureful and sophisticated unit testing framework (especially when used with Google Mock) and solves 1) and 2), and moreover, you can still easily use the handy QTestLib features such as QSignalSpy and simulation of GUI events, etc. It was a bit of a pain to switch, but thankfully the project had not advanced too far and many of the changes could be automated.
Personally, I will not be using QtTest over Google Test for future projects - it offers no real advantages that I can see, and has important drawbacks.
To append to Joe's answer.
Here's a small header I use (testrunner.h), containing a utility class that spawns an event loop (which is, for example, needed to test queued signal-slot connections and databases) and "runs" QTest-compatible classes:
#ifndef TESTRUNNER_H
#define TESTRUNNER_H
#include <QList>
#include <QTimer>
#include <QCoreApplication>
#include <QtTest>
class TestRunner: public QObject
{
Q_OBJECT
public:
TestRunner()
: m_overallResult(0)
{}
void addTest(QObject * test) {
test->setParent(this);
m_tests.append(test);
}
bool runTests() {
// QCoreApplication expects argc >= 1 and a valid argv[0]
int argc = 1;
char appName[] = "testrunner";
char * argv[] = { appName };
QCoreApplication app(argc, argv);
QTimer::singleShot(0, this, SLOT(run()) );
app.exec();
return m_overallResult == 0;
}
private slots:
void run() {
doRunTests();
QCoreApplication::instance()->quit();
}
private:
void doRunTests() {
foreach (QObject * test, m_tests) {
m_overallResult|= QTest::qExec(test);
}
}
QList<QObject *> m_tests;
int m_overallResult;
};
#endif // TESTRUNNER_H
Use it like this:
#include "testrunner.h"
#include "..." // header for your QTest compatible class here
#include <QDebug>
int main() {
TestRunner testRunner;
testRunner.addTest(new ...()); //your QTest compatible class here
qDebug() << "Overall result: " << (testRunner.runTests()?"PASS":"FAIL");
return 0;
}
I don't know that QTestLib is "better" than other frameworks in such general terms. There is one thing it does well, and that's providing a good way to test Qt-based applications.
You could integrate QTest into your new Google Test based setup. I haven't tried it, but based on how QTestLib is architected, it seems like it would not be too complicated.
Tests written with pure QTestLib have an -xml option that you could use, along with some XSLT transformations to convert to the needed format for a continuous integration server. However, a lot of that depends on which CI server you go with. I would imagine the same applies to GTest.
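For example (flags as documented for Qt 4's QTestLib; details may differ in later versions):
/myTestDirectory$ testMyWidget -xml -o results.xml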
A single test app per test case never caused a lot of friction for me, but that depends on having a build system that would do a decent job of managing the building and execution of the test cases.
I don't know of anything in Qt Creator that would require a separate project per test case, but it could have changed since the last time I looked at Qt Creator.
I would also suggest sticking with QtCore and staying away from the STL. Using QtCore throughout will make dealing with the GUI bits that require the Qt data types easier. You won't have to worry about converting from one data type to another in that case.
Why not use the unit-testing framework included with Qt?
An example: the QTestLib Tutorial.
I unit tested our libraries using gtest and QSignalSpy. Use QSignalSpy to catch signals. You can call slots directly (like normal methods) to test them.
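A minimal sketch of that combination (Counter and its valueChanged(int) signal are hypothetical names; a QCoreApplication is constructed in main because parts of Qt expect one to exist):
#include <gtest/gtest.h>
#include <QCoreApplication>
#include <QSignalSpy>
#include "counter.h" // hypothetical QObject with slot setValue(int) and signal valueChanged(int)

TEST(CounterTest, EmitsValueChanged)
{
    Counter counter;
    QSignalSpy spy(&counter, SIGNAL(valueChanged(int)));
    counter.setValue(42); // call the slot directly, like a normal method
    ASSERT_EQ(1, spy.count()); // exactly one emission
    EXPECT_EQ(42, spy.takeFirst().at(0).toInt()); // with the right argument
}

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}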
QtTest is mostly useful for testing parts that require the Qt event loop/signal dispatching. It's designed in a way that each test case requires a separate executable, so it should not conflict with any existing test framework used for the rest of the application.
(Btw, I highly recommend using QtCore even for non-GUI parts of the applications. It's much nicer to work with.)
To extend mlvljr's and Joe's solution, we can even support the complete set of QtTest options per test class and still run everything in one batch, with logging:
usage:
help: "TestSuite.exe -help"
run all test classes (with logging): "TestSuite.exe"
print all test classes: "TestSuite.exe -classes"
run one test class with QtTest parameters: "TestSuite.exe testClass [options] [testfunctions[:testdata]]..."
Header
#ifndef TESTRUNNER_H
#define TESTRUNNER_H
#include <QList>
#include <QTimer>
#include <QCoreApplication>
#include <QtTest>
#include <QStringBuilder>
/*
Taken from https://stackoverflow.com/questions/1524390/what-unit-testing-framework-should-i-use-for-qt
BEWARE: there are some concerns doing so, see https://bugreports.qt.io/browse/QTBUG-23067
*/
class TestRunner : public QObject
{
Q_OBJECT
public:
TestRunner() : m_overallResult(0)
{
QDir dir;
if (!dir.exists(mTestLogFolder))
{
if (!dir.mkdir(mTestLogFolder))
qFatal("Cannot create folder %s", mTestLogFolder);
}
}
void addTest(QObject * test)
{
test->setParent(this);
m_tests.append(test);
}
bool runTests(int argc, char * argv[])
{
QCoreApplication app(argc, argv);
QTimer::singleShot(0, this, SLOT(run()));
app.exec();
return m_overallResult == 0;
}
private slots:
void run()
{
doRunTests();
QCoreApplication::instance()->quit();
}
private:
void doRunTests()
{
// BEWARE: we assume either no command line parameters or evaluate first parameter ourselves
// usage:
// help: "TestSuite.exe -help"
// run all test classes (with logging): "TestSuite.exe"
// print all test classes: "TestSuite.exe -classes"
// run one test class with QtTest parameters: "TestSuite.exe testClass [options] [testfunctions[:testdata]]..."
if (QCoreApplication::arguments().size() > 1 && QCoreApplication::arguments()[1] == "-help")
{
qDebug() << "Usage:";
qDebug().noquote() << "run all test classes (with logging):\t\t" << qAppName();
qDebug().noquote() << "print all test classes:\t\t\t\t" << qAppName() << "-classes";
qDebug().noquote() << "run one test class with QtTest parameters:\t" << qAppName() << "testClass [options][testfunctions[:testdata]]...";
qDebug().noquote() << "get more help for running one test class:\t" << qAppName() << "testClass -help";
exit(0);
}
foreach(QObject * test, m_tests)
{
QStringList arguments;
QString testName = test->metaObject()->className();
if (QCoreApplication::arguments().size() > 1)
{
if (QCoreApplication::arguments()[1] == "-classes")
{
// only print test classes
qDebug().noquote() << testName;
continue;
}
else
if (QCoreApplication::arguments()[1] != testName)
{
continue;
}
else
{
arguments = QCoreApplication::arguments();
arguments.removeAt(1);
}
}
else
{
arguments.append(QCoreApplication::arguments()[0]);
// log to console
arguments.append("-o"); arguments.append("-,txt");
// log to file as TXT
arguments.append("-o"); arguments.append(mTestLogFolder % "/" % testName % ".log,txt");
// log to file as XML
arguments.append("-o"); arguments.append(mTestLogFolder % "/" % testName % ".xml,xunitxml");
}
m_overallResult |= QTest::qExec(test, arguments);
}
}
QList<QObject *> m_tests;
int m_overallResult;
const QString mTestLogFolder = "testLogs";
};
#endif // TESTRUNNER_H
own code
#include "testrunner.h"
#include "test1"
...
#include <QDebug>
int main(int argc, char * argv[])
{
TestRunner testRunner;
//your QTest compatible class here
testRunner.addTest(new Test1);
testRunner.addTest(new Test2);
...
bool pass = testRunner.runTests(argc, argv);
qDebug() << "Overall result: " << (pass ? "PASS" : "FAIL");
return pass?0:1;
}
If you are using Qt, I would recommend using QtTest, because it has facilities to test the UI and is simple to use.
If you use QtCore, you can probably do without STL. I frequently find the Qt classes easier to use than the STL counterparts.
I've just been playing around with this. The main advantage of using Google Test over QtTest for us is that we do all our UI development in Visual Studio. If you use Visual Studio 2012 and install the Google Test Adapter you can get VS to recognise the tests and include them in its Test Explorer. This is great for developers to be able to use as they write code, and because Google Test is portable we can also add the tests to the end of our Linux build.
I'm hoping that in the future someone will add C++ support to one of the continuous testing tools that C# has, like NCrunch, Giles and ContinuousTests.
Of course, you might find someone writes another adapter for VS2012 that adds QtTest support to the Test Adapter, in which case this advantage goes away! If anyone is interested in this, there's a good blog post: Authoring a new Visual Studio unit test adapter.
For Visual Studio test adapter tool support with the QtTest framework use this Visual Studio extension: https://visualstudiogallery.msdn.microsoft.com/cc1fcd27-4e58-4663-951f-fb02d9ff3653