
- pascal

tags: - tools - qt - c++ - unittest - cmake - TODO


Unit Tests

Basics

This section covers the implementation of Qt unit tests with CMake (rather than QMake).

Qt's main strength is building UIs; the goal here is to be able to run the tests automatically, without any user interaction.

Preparation

First, we create a tests folder alongside the program sources, containing one or more .cpp files for all our tests:

project
├── bin
├── res
├── src
│   ├── main.cpp
│   └── myinclude.h
├── tests
│   ├── mytest1.cpp
│   └── mytest2.cpp
└── CMakeLists.txt

We then need to modify our CMakeLists.txt to include the test files (note that a single global CMakeLists.txt is used here; for bigger projects it would be better to have one CMakeLists.txt per level).

cmake_minimum_required(VERSION 2.8.11 FATAL_ERROR)
#set(CMAKE_PREFIX_PATH "D:/Qt/5.15.1/msvc2019_64")

project(MyProject)

enable_testing() # Add this for tests activation

set(CMAKE_AUTOMOC ON) # For meta object compiler
set(CMAKE_AUTORCC ON) # Resource files
set(CMAKE_AUTOUIC ON) # UI files
set(CMAKE_INCLUDE_CURRENT_DIR ON)

# Find the QtWidgets library
find_package(Qt5 REQUIRED Core Widgets)
find_package(Qt5Test REQUIRED) # Add this for tests activation

file(GLOB project_SOURCES
        "src/*.h"
        "src/*.hpp"
        "res/*.qrc"
        )

#Main app
add_executable(${PROJECT_NAME} "src/main.cpp" ${project_SOURCES})
target_link_libraries(${PROJECT_NAME} Qt5::Widgets)
#Unit tests
add_executable(MyTest1 "tests/mytest1.cpp" ${project_SOURCES}) # needed sources
target_link_libraries(MyTest1 Qt5::Widgets Qt5::Test) # needed libraries, with Qt5::Test added
add_test(NAME MyTest1 COMMAND MyTest1) # register the executable so that CTest can run it
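With a standard out-of-source CMake workflow, the project and its tests can then be built and run from the command line (the build directory and Qt path below are assumptions to adapt to your setup):

```shell
mkdir build && cd build
cmake ..                   # add -DCMAKE_PREFIX_PATH=<path to Qt> if Qt is not found
cmake --build .
./MyTest1                  # run one test executable directly
ctest --output-on-failure  # or run every test registered with add_test
```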

Once the project is set up, we can dig into the implementation of our tests.

Tests implementation

A typical test file looks like this, with all tests implemented as private slots:

#include <QTest> // Needed to test
// Include the files required to do tests
#include "src/myinclude.h"

class MyTest1 : public QObject
{
    Q_OBJECT
private slots:
    void test1()
    {
        QVERIFY(1 > 0); // will pass
    }
    void test2()
    {
        QVERIFY(1 > 0); // pass
        QVERIFY(0 > 1); // fail
        // => finally, test2 did not pass
    }
};
// Put these final lines AFTER the class to create a stand-alone executable
QTEST_MAIN(MyTest1)
#include "mytest1.moc" // auto-generated by moc from this file

It is possible to test multiple conditions inside the same test. If any of them fails, the whole test is considered failed.

Running the MyTest1 executable prints in the output log whether each test passed or failed. Taken from the Qt documentation, a typical output looks like this:

********* Start testing of TestQString *********
Config: Using QtTest library %VERSION%, Qt %VERSION%
PASS   : TestQString::initTestCase()
PASS   : TestQString::toUpper()
PASS   : TestQString::cleanupTestCase()
Totals: 3 passed, 0 failed, 0 skipped
********* Finished testing of TestQString *********

Multiple tests

When the same test has to be run multiple times with different values, typing everything by hand quickly becomes repetitive and unreadable.

To automate this, you need to create a new slot with the same method name, adding _data() to it.

This slot builds a table containing all our data, which is automatically fed to our test (which needs to be rewritten to fetch the values):

void test1()
{
    QFETCH(int, mybigint);
    QFETCH(int, mysmallint);
    QVERIFY(mybigint > mysmallint);
}
void test1_data()
{
    // Create columns
    QTest::addColumn<int>("mybigint");
    QTest::addColumn<int>("mysmallint");
    // Fill with rows (each row name must be unique)
    QTest::newRow("bigger")  << 5 << 2; // passes
    QTest::newRow("equal")   << 5 << 5; // fails (5 > 5 is false)
    QTest::newRow("smaller") << 2 << 5; // fails
    // ...
}

GUI Events
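QtTest can also simulate keyboard and mouse events on widgets, which is how UI code can be exercised without a real user. A minimal sketch (the class and file names are illustrative, not from the project above):

```cpp
#include <QTest>
#include <QLineEdit>

class MyGuiTest : public QObject
{
    Q_OBJECT
private slots:
    void testTyping()
    {
        QLineEdit lineEdit;
        // Simulate the user typing "hello" into the widget
        QTest::keyClicks(&lineEdit, "hello");
        QCOMPARE(lineEdit.text(), QString("hello"));
    }
};
QTEST_MAIN(MyGuiTest) // QTEST_MAIN creates the QApplication needed for widgets
#include "myguitest.moc"
```

QTest::mouseClick() works the same way for pointer input, taking the target widget and the button to press.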

Benchmark

  • It is possible to add a benchmark for parts of code that need measurement.
  • You simply need to use the QBENCHMARK{my code here} macro.
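Following the bullets above, a minimal sketch of a benchmark slot (the measured code is just an example):

```cpp
#include <QTest>
#include <QString>

class MyBenchmark : public QObject
{
    Q_OBJECT
private slots:
    void benchToUpper()
    {
        QString s("hello world");
        QBENCHMARK {
            s.toUpper(); // this block is run repeatedly and timed
        }
    }
};
QTEST_MAIN(MyBenchmark)
#include "mybenchmark.moc"
```

The measured timings are printed in the output log alongside the PASS/FAIL lines.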

Skipping a test

  • If the tests can only run under specific conditions, you can use the QSKIP("Error message") macro.
  • Once encountered, the rest of the test is skipped (e.g. when a specific revision of a library is required to run the tests ...).
  • If the macro is put into the _data() function, it will skip all the tests that use this data.
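As a sketch of the idea above, a test can bail out early when a compile-time requirement is not met (MYLIB_VERSION_2 is a made-up flag for illustration):

```cpp
#include <QTest>

class MySkipTest : public QObject
{
    Q_OBJECT
private slots:
    void testWithLib()
    {
#ifndef MYLIB_VERSION_2 // hypothetical flag set when the right library revision is present
        QSKIP("This test requires revision 2 of mylib");
#endif
        QVERIFY(true); // checks that need the library would go here
    }
};
QTEST_MAIN(MySkipTest)
#include "myskiptest.moc"
```

In the output log, the test is then reported as SKIP instead of PASS or FAIL.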

Misc. Macros

  • QBENCHMARK{my code} : to benchmark a part of code
  • QBENCHMARK_ONCE{my code} : to run the benchmark with only one pass
  • QCOMPARE(actual, expected) : compare two values; both are printed when they differ, to help spot the error
  • QEXPECT_FAIL(dataIndex, comment, mode) : tells that the next check is expected to fail (known bug, pending revision ...); dataIndex is the row name when using a _data() function (empty string otherwise), comment is printed when the failure is encountered, and mode is Continue or Abort depending on whether the rest of the test must still run
  • QFAIL(message) : force a failure
  • QFETCH(type, name) : to fetch values from a _data() function
  • QFETCH_GLOBAL(type, name) : to fetch values from the global data table created by initTestCase_data()
  • QFINDTESTDATA(filename) : look for the given test data file and return its full path as a string (empty if none found)
  • QSKIP(description) : skip the test
  • QTEST(actual, testElement) : like QCOMPARE, but fetches the expected value directly from the test's data table
  • QVERIFY(condition) : check that the condition is true
  • QVERIFY2(condition, message) : as QVERIFY, but outputs the message when the condition is false
  • QVERIFY_EXCEPTION_THROWN(expression, exceptiontype) : execute the expression and check that it throws exceptiontype (or a subclass of it); if so, the test continues
  • QWARN(message) : appends a warning message to the test output
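As a small sketch combining two of these macros inside a test slot (as in the earlier examples; the bug number in the comment is illustrative):

```cpp
void testKnownBug()
{
    QCOMPARE(QString("qt").toUpper(), QString("QT")); // both values printed on mismatch

    // The next check is known to fail; report it as XFAIL and keep running
    QEXPECT_FAIL("", "bug #123 is not fixed yet", Continue);
    QVERIFY(0 > 1);
}
```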