// Case I: (It works, but I am not sure whether it is safe. Is it because the Windows
// messages are already handled in a message queue?)
void MyDlg::OnClickButton1()
{
std::thread([this]()
{
// some long computation here
SetDlgItemText(IDC_STATIC_TEXT, L"Updated");
}).detach();
}
// Case II: (It works, but is the process_queue redundant?)
void MyDlg::OnClickButton1()
{
std::thread([this]()
{
// some long computation here
command_node node =
command_factory("SetDlgItemText",IDC_STATIC_TEXT, "Updated");
SendMessageToMyProcessQueue(node);
}).detach();
}
void MyDlg::OnPaint()
{
ExecuteFromMyProcessQueue();
CDialogEx::OnPaint();
}
This is a sample snippet in VC++ using MFC. I want to use a worker thread to complete a task and send the result to a control. Which one is preferable, or is there another workaround?
It is generally a good idea (or outright required) to refrain from accessing the GUI from any thread other than the main thread. MFC might assert or it might not, depending on how consistently it is implemented. See also this answer. That rules out your first case.
Using message queues is the safe and correct way to do it. See also this thread on how to update the UI from another thread.
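For illustration, here is a minimal sketch of that message-queue approach in MFC; the message ID WM_APP_UPDATE_TEXT and the handler OnUpdateText are hypothetical names, not from the original snippet. The worker thread posts a user-defined message with ::PostMessage (safe from any thread), and the handler runs on the GUI thread, where touching the control is fine:
#define WM_APP_UPDATE_TEXT (WM_APP + 1)

// In the message map of MyDlg:
// ON_MESSAGE(WM_APP_UPDATE_TEXT, &MyDlg::OnUpdateText)

void MyDlg::OnClickButton1()
{
    HWND hWnd = GetSafeHwnd(); // capture the window handle on the GUI thread
    std::thread([hWnd]()
    {
        // some long computation here
        ::PostMessage(hWnd, WM_APP_UPDATE_TEXT, 0, 0); // safe from a worker thread
    }).detach();
}

// Runs on the GUI thread, so accessing the control is safe here.
LRESULT MyDlg::OnUpdateText(WPARAM, LPARAM)
{
    SetDlgItemText(IDC_STATIC_TEXT, L"Updated");
    return 0;
}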
Related
I am working on a project where I will be ingesting multiple binary files, decode them, and convert their data into a CSV. I figured the quickest way to do this would be to thread the work. Simply load the files into a queue, have the threads grab a file, work on it, convert it, output it, and then die.
What I wrote actually works great; however, I cannot figure out how to keep the GUI responsive, as I have a progress bar that I would like to update, or at least let the user move the window to a corner while it processes the data. I believe this is because std::thread is just hanging up the GUI.
In my code I have the following function once a button is pressed to execute:
void MyExtractor::on_Execute_clicked()
{
QStringList binary = tlmFiles.entryList(QStringList() << "*.bin",QDir::Files);
queue.clear();
threadPool.clear();
if(binary.size() != 0)
{
foreach(QString filename, binary)
{
queue.emplace_back(inputDir + '/' + filename);
}
for (unsigned int i = 0; i < std::thread::hardware_concurrency(); ++i)
{
threadPool.emplace_back(&MyExtractor::initThread,this,std::ref(queue),std::ref(mut));
}
}
else
{
message.setText("No binary files found! Please select another folder!");
message.exec();
}
for (auto &&e : threadPool)
{
e.join();
}
}
And initThread looks like this:
void MyExtractor::initThread(std::deque<QString> &queue, std::mutex &mutex)
{
QString file;
QString toOutput = outputDir;
while(queue.size() > 0)
{
{
std::lock_guard<std::mutex> lock(mutex);
if(!queue.empty())
{
file = queue.front();
queue.pop_front();
}
}
BitExtract *bitExtractor = new BitExtract();
if(file.size() != 0)
{
bitExtractor->extract(file,toOutput);
}
delete bitExtractor;
}
}
I have been reading about QThread. From what I understand, it seems I need a separate thread to watch the work while another thread watches the GUI (I am not sure I have worded that correctly). However, I am not sure how to go about that, since I am using std::thread to do the conversion and I don't know how well QThread will play with it. Any suggestions?
EDIT: I should make it clear that threadPool is a std::vector<std::thread>
As noted by #drescherjm, your problem is here:
for (auto &&e : threadPool)
{
e.join();
}
join() won't return until the thread has completed, which means your GUI thread will be blocked inside that for-loop until all threads have exited, which is exactly what you want to avoid. (It is always desirable for any function running in the main/Qt/GUI thread to return as quickly as possible, so that Qt's GUI event loop can remain responsive.)
Avoiding that is fairly straightforward -- instead of calling join() right after the threads have been spawned, you should only call join() on a thread after the thread has notified you that it has completed its work and is about to exit. That way join() will never take more than a few milliseconds to return.
As for how to get a std::thread to notify your main/GUI thread that it has finished its task, one simple way is to have the std::thread call QApplication::postEvent() just before it exits. Override the event(QEvent *) virtual method on whatever object you passed as the first argument to postEvent() to handle the posted event-object (note that you can make your own subclass of QEvent that carries whatever data you want to send to the GUI thread). In that handler, call join() on the std::thread, plus whatever cleanup and result-handling code you need to execute after a thread has returned its result.
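As a minimal sketch of that idea (the event class ThreadFinishedEvent, the worker-index bookkeeping, and the assumption that MyExtractor derives from QWidget are all illustrative, not taken from the question):
#include <QApplication>
#include <QEvent>

// Hypothetical custom event; registerEventType() reserves a unique event id.
class ThreadFinishedEvent : public QEvent
{
public:
    static QEvent::Type eventType()
    {
        static const QEvent::Type t =
            static_cast<QEvent::Type>(QEvent::registerEventType());
        return t;
    }
    explicit ThreadFinishedEvent(int workerIndex)
        : QEvent(eventType()), index(workerIndex) {}
    int index; // which worker finished
};

// At the end of the worker function (still on the std::thread):
//     QApplication::postEvent(this, new ThreadFinishedEvent(myIndex));
// postEvent() is safe to call from any thread and takes ownership of the event.

// Handled on the GUI thread:
bool MyExtractor::event(QEvent *e)
{
    if (e->type() == ThreadFinishedEvent::eventType()) {
        int idx = static_cast<ThreadFinishedEvent *>(e)->index;
        threadPool[idx].join(); // returns almost immediately now
        // update the progress bar / collect results here
        return true;
    }
    return QWidget::event(e); // assuming MyExtractor derives from QWidget
}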
I need to work with several objects, where each operation may take a lot of time.
The processing cannot be done in the GUI (main) thread, where I start it.
I need to make all communication with these objects asynchronous, something like std::async with std::future, or QtConcurrent::run() with QFuture in my main framework (Qt 5), but those do not provide thread selection. I always need to work with a selected object (objects == devices) in one and the same additional thread,
because:
I want a universal solution and don't want to make each class thread-safe
For example, even if I made a thread-safe wrapper for QSerialPort, a serial port in Qt cannot be accessed from more than one thread:
Note: The serial port is always opened with exclusive access (that is, no other process or thread can access an already opened serial port).
Usually communication with a device consists of transmitting a command and receiving an answer. I want to process each answer exactly in the place where the request was sent, and I don't want to use event-driven-only logic.
So, my question.
How can the function be implemented?
MyFuture<T> fut = myAsyncStart(func, &specificLiveThread);
It is essential that the same live thread can be passed in many times.
Let me answer without referring to the Qt library, since I don't know its threading API.
In the C++11 standard library there is no straightforward way to reuse a created thread: a thread executes a single function and can only be joined or detached. However, you can implement what you want with the producer-consumer pattern: the consumer thread executes tasks (represented as std::function objects, for instance) that a producer thread places in a queue. So, if I understand correctly, you need a single-threaded thread pool.
I can recommend my C++14 implementation of thread pools as task queues. It isn't commonly used (yet!) but it is covered with unit tests and checked with the thread sanitizer multiple times. The documentation is sparse, but feel free to ask anything in the GitHub issues!
Library repository: https://github.com/Ravirael/concurrentpp
And your use case:
#include <task_queues.hpp>
#include <future> // std::future
int main() {
// The single threaded task queue object - creates one additional thread.
concurrent::n_threaded_fifo_task_queue queue(1);
// Add tasks to queue, task is executed in created thread.
std::future<int> future_result = queue.push_with_result([] { return 4; });
// Blocks until task is completed.
int result = future_result.get();
// Executes task on the same thread as before.
std::future<int> second_future_result = queue.push_with_result([] { return 4; });
}
If you want to follow the Active Object approach, here is an example using templates.
The WorkPackage class and its interface are just for storing functions of different return types in a vector (used later in the ActiveObject::async member function):
#include <atomic>
#include <condition_variable>
#include <functional>
#include <future>
#include <memory>
#include <mutex>
#include <thread>
#include <utility>
#include <vector>

class IWorkPackage {
public:
virtual void execute() = 0;
virtual ~IWorkPackage() {
}
};
template <typename R>
class WorkPackage : public IWorkPackage{
private:
std::packaged_task<R()> task;
public:
WorkPackage(std::packaged_task<R()> t) : task(std::move(t)) {
}
void execute() final {
task();
}
std::future<R> get_future() {
return task.get_future();
}
};
Here's the ActiveObject class which expects your devices as a template. Furthermore it has a vector to store the method requests of the device and a thread to execute those methods one after another. Finally the async function is used to request a method call from the device:
template <typename Device>
class ActiveObject {
private:
Device servant;
std::thread worker;
std::vector<std::unique_ptr<IWorkPackage>> work_queue;
std::atomic<bool> done;
std::mutex queue_mutex;
std::condition_variable cv;
void worker_thread() {
while(done.load() == false) {
std::unique_ptr<IWorkPackage> wp;
{
std::unique_lock<std::mutex> lck {queue_mutex};
cv.wait(lck, [this] {return !work_queue.empty() || done.load() == true;});
if(done.load() == true) continue;
wp = std::move(work_queue.back());
work_queue.pop_back();
}
if(wp) wp->execute();
}
}
public:
ActiveObject(): done(false) {
worker = std::thread {&ActiveObject::worker_thread, this};
}
~ActiveObject() {
{
std::unique_lock<std::mutex> lck{queue_mutex};
done.store(true);
}
cv.notify_one();
worker.join();
}
template<typename R, typename ...Args, typename ...Params>
std::future<R> async(R (Device::*function)(Params...), Args... args) {
std::unique_ptr<WorkPackage<R>> wp {new WorkPackage<R> {std::packaged_task<R()> { std::bind(function, &servant, args...) }}};
std::future<R> fut = wp->get_future();
{
std::unique_lock<std::mutex> lck{queue_mutex};
work_queue.push_back(std::move(wp));
}
cv.notify_one();
return fut;
}
// In case you want to call some functions directly on the device
Device* operator->() {
return &servant;
}
};
You can use it as follows:
ActiveObject<QSerialPort> ao_serial_port;
// direct call:
ao_serial_port->setReadBufferSize(size);
//async call:
std::future<void> buf_future = ao_serial_port.async(&QSerialPort::setReadBufferSize, size);
std::future<Parity> parity_future = ao_serial_port.async(&QSerialPort::parity);
// Maybe do some other work here
buf_future.get(); // wait until calculations are ready
Parity p = parity_future.get(); // blocks if result not ready yet, i.e. if method has not finished execution yet
EDIT to answer the question in the comments: The AO is mainly a concurrency pattern for multiple readers/writers. As always, its use depends on the situation. The pattern is commonly used in distributed systems/network applications, for example when multiple clients request a service from a server. The clients benefit from the AO pattern because they are not blocked while waiting for the server to answer.
One reason why this pattern is not used so often in fields other than network apps might be the thread overhead: creating a thread for every active object results in a lot of threads, and thus thread contention, if the number of CPUs is low and many active objects are used at once.
I can only guess why people think it is a strange issue: As you already found out it does require some additional programming. Maybe that's the reason but I'm not sure.
But I think the pattern is also very useful for other reasons and uses, such as your example, where the main thread (and also other background threads) requires a service from singletons, for example devices or hardware interfaces that are only available in small numbers, are slow in their computations, and require concurrent access, all without being blocked while waiting for a result.
It's Qt. Its signal-slot mechanism is thread-aware. On your secondary (non-GUI) thread, create a QObject-derived class with an execute slot. Signals connected to this slot will marshal the event to that thread.
Note that this QObject can't be a child of a GUI object, since children need to live in their parent's thread, and this object explicitly does not live in the GUI thread.
You can handle the result using existing std::promise logic, just like std::future does.
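A minimal sketch of that setup, assuming a hypothetical Worker class and signal/slot names (they are illustrative, not a prescribed API):
#include <QObject>
#include <QThread>

// Hypothetical worker object; once moved to the secondary thread, its slots run there.
class Worker : public QObject
{
    Q_OBJECT
public slots:
    void execute(int input) // invoked via a queued connection from the GUI thread
    {
        int result = input * 2; // long computation would go here
        emit finished(result);
    }
signals:
    void finished(int result);
};

// One-time setup on the GUI thread (sketch):
//   QThread *thread = new QThread;
//   Worker *worker = new Worker;              // no parent, as noted above
//   worker->moveToThread(thread);
//   thread->start();
//   QObject::connect(gui, &Gui::requestWork, worker, &Worker::execute);    // queued
//   QObject::connect(worker, &Worker::finished, gui, &Gui::handleResult);  // queued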
I'm looking for a way to perform cross-thread operations the way SendMessage allows. In other words, how to have a thread execute some function in another thread. But I want to do it without SendMessage as that requires a window which is not always available.
Synchronously or asynchronously is fine.
.NET does it with the System.Windows.Threading.Dispatcher class, so surely there's a way?
So I'm guessing we're talking about the Windows OS here, right? You should specify that in your question. A solution for Windows might be different from a solution for Linux, for example.
Now, regarding your question: any solution for that situation will force your thread(s) either to wait for some event to happen (a task being enqueued) or to poll for a task endlessly.
So we're talking about either some kind of mutex, a condition variable, or some special sleeping function.
One simple and non-portable way of sending "tasks" to other threads is to use the built-in Win32 mechanism of APCs (Asynchronous Procedure Calls).
It utilizes the functions QueueUserAPC and SleepEx. I have briefly tested this solution on Windows 10 + Visual Studio 2015:
#include <windows.h>
#include <functional> // std::function, std::bind
#include <memory>     // std::unique_ptr, std::make_unique
#include <stdexcept>  // std::runtime_error
#include <utility>    // std::forward

namespace details {
void waitforTask() noexcept {
SleepEx(INFINITE, TRUE);
}
void __stdcall executeTask(ULONG_PTR ptr) {
if (ptr == 0) {
return;
}
std::unique_ptr<std::function<void()>> scopedPtr(reinterpret_cast<std::function<void()>*>(ptr));
(*scopedPtr)();
}
}
template<class F, class ... Args>
void sendTask(void* threadHandle, F&& f, Args&& ... args) {
auto task =
std::make_unique<std::function<void()>>(std::bind(std::forward<F>(f), std::forward<Args>(args)...));
const auto res = QueueUserAPC(&details::executeTask,
threadHandle,
reinterpret_cast<ULONG_PTR>(task.get()));
if (res == 0) {
throw std::runtime_error("sendTask failed.");
}
task.release();
}
Example use:
std::thread thread([] {
for (;;) {
details::waitforTask();
}
});
sendTask(thread.native_handle(), [](auto literal) {
std::cout << literal;
}, "hello world");
This solution also shows how to use Win32 without contaminating the business logic, written in standard C++, with unrelated Win32 code.
This solution can also be adapted into a cross-platform one: instead of using an internal, semi-documented Win32 task queue, one can build one's own task queue using std::queue and std::function<void()>, and instead of sleeping in an alertable state, one can wait on a std::condition_variable. This is what any thread pool does behind the scenes in order to get and execute tasks. If you do want a cross-platform solution, I suggest googling "C++ thread pool" to see examples of such a task queue.
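For illustration, a minimal standard-library sketch of such a task queue (the class name TaskQueue and its members are assumptions): one worker thread sleeps on a condition variable and executes posted std::function<void()> tasks in order.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Hypothetical minimal task queue: one worker thread executes posted tasks in order.
class TaskQueue {
public:
    TaskQueue() : worker_([this] { run(); }) {}
    ~TaskQueue() {
        post([this] { done_ = true; }); // poison-pill task stops the loop
        worker_.join();
    }
    void post(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();
    }
private:
    void run() {
        while (!done_) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return !tasks_.empty(); });
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task(); // execute outside the lock
        }
    }
    std::queue<std::function<void()>> tasks_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool done_ = false; // only touched by the worker thread
    std::thread worker_;
};
All tasks posted to one TaskQueue instance run on the same worker thread, which is exactly what the APC-based solution above achieves with Win32 primitives.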
I'm trying to implement the Actor computation model over threads in C++ using boost::thread.
But the program throws a weird exception during execution. The exception is not reproducible consistently; sometimes the program works correctly.
Here is my code:
actor.hpp
class Actor {
public:
typedef boost::function<int()> Job;
private:
std::queue<Job> d_jobQueue;
boost::mutex d_jobQueueMutex;
boost::condition_variable d_hasJob;
boost::atomic<bool> d_keepWorkerRunning;
boost::thread d_worker;
void workerThread();
public:
Actor();
virtual ~Actor();
void execJobAsync(const Job& job);
int execJobSync(const Job& job);
};
actor.cpp
namespace {
int executeJobSync(std::string *error,
boost::promise<int> *promise,
const Actor::Job *job)
{
int rc = (*job)();
promise->set_value(rc);
return 0;
}
}
void Actor::workerThread()
{
while (d_keepWorkerRunning) try {
Job job;
{
boost::unique_lock<boost::mutex> g(d_jobQueueMutex);
while (d_jobQueue.empty()) {
d_hasJob.wait(g);
}
job = d_jobQueue.front();
d_jobQueue.pop();
}
job();
}
catch (...) {
// Log error
}
}
void Actor::execJobAsync(const Job& job)
{
boost::mutex::scoped_lock g(d_jobQueueMutex);
d_jobQueue.push(job);
d_hasJob.notify_one();
}
int Actor::execJobSync(const Job& job)
{
std::string error;
boost::promise<int> promise;
boost::unique_future<int> future = promise.get_future();
{
boost::mutex::scoped_lock g(d_jobQueueMutex);
d_jobQueue.push(boost::bind(executeJobSync, &error, &promise, &job));
d_hasJob.notify_one();
}
int rc = future.get();
if (rc) {
ErrorUtil::setLastError(rc, error.c_str());
}
return rc;
}
Actor::Actor()
: d_keepWorkerRunning(true)
, d_worker(&Actor::workerThread, this)
{
}
Actor::~Actor()
{
d_keepWorkerRunning = false;
{
boost::mutex::scoped_lock g(d_jobQueueMutex);
d_hasJob.notify_one();
}
d_worker.join();
}
The exception that is actually thrown is boost::thread_interrupted, on the int rc = future.get(); line. But from the Boost docs I can't work out the reason for this exception. The docs say:
Throws: - boost::thread_interrupted if the result associated with *this is not ready at the point of the call, and the current thread is interrupted.
But my worker thread cannot be in an interrupted state.
When I use gdb and set "catch throw", I see that the backtrace looks like:
throw thread_interrupted
boost::detail::interruption_checker::check_for_interruption
boost::detail::interruption_checker::interruption_checker
boost::condition_variable::wait
boost::detail::future_object_base::wait_internal
boost::detail::future_object_base::wait
boost::detail::future_object::get
boost::unique_future::get
I looked into the Boost sources but can't work out why interruption_checker decided that the worker thread was interrupted.
So, C++ gurus, please help me. What do I need to do to get correct code?
I'm using:
boost 1_53
Linux version 2.6.18-194.32.1.el5 Red Hat 4.1.2-48
gcc 4.7
EDIT
Fixed it! Thanks to Evgeny Panasyuk and Lazin. The problem was in TLS management. boost::thread and boost::thread_specific_ptr use the same TLS storage for their purposes. In my case there was a problem when they both tried to change this storage on creation (unfortunately I didn't work out the details of why this happens), so the TLS became corrupted.
I replaced boost::thread_specific_ptr in my code with a __thread-specified variable.
Off-topic: during debugging I found memory corruption in an external library and fixed it =)
EDIT 2
I found the exact problem... It is a bug in GCC =)
The _GLIBCXX_DEBUG compilation flag breaks the ABI.
You can see the discussion on the Boost bug tracker:
https://svn.boost.org/trac/boost/ticket/7666
I have found several bugs:
The Actor::workerThread function double-unlocks d_jobQueueMutex. The first unlock is the manual d_jobQueueMutex.unlock();, the second is in the destructor of boost::unique_lock<boost::mutex>.
You should prevent one of the unlocks, for example by releasing the association between the unique_lock and the mutex:
g.release(); // <------------ PATCH
d_jobQueueMutex.unlock();
Or add an additional code block plus a default-constructed Job.
It is possible that workerThread will never leave the following loop:
while (d_jobQueue.empty()) {
d_hasJob.wait(g);
}
Imagine the following case: d_jobQueue is empty, Actor::~Actor() is called, it sets the flag and notifies the worker thread:
d_keepWorkerRunning = false;
d_hasJob.notify_one();
workerThread wakes up in the while loop, sees that the queue is empty, and sleeps again.
It is common practice to send a special final job to stop the worker thread:
~Actor()
{
execJobSync([this]()->int
{
d_keepWorkerRunning = false;
return 0;
});
d_worker.join();
}
In this case, d_keepWorkerRunning is not required to be atomic.
LIVE DEMO on Coliru
EDIT:
I have added the event queue code to your example.
You have a concurrent queue in both EventQueueImpl and Actor, but for different types. It is possible to extract the common part into a separate entity, concurrent_queue<T>, which works for any type. It would be much easier to debug and test the queue in one place than to catch bugs scattered over different classes.
So, you can try to use this concurrent_queue<T> (on Coliru).
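For illustration, a minimal sketch of what such a concurrent_queue<T> might look like (this is not the implementation behind the Coliru link, just the general shape, written against the standard library; a Boost version would be analogous):
#include <condition_variable>
#include <deque>
#include <mutex>

// Hypothetical minimal blocking queue usable for both jobs and events.
template <typename T>
class concurrent_queue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            items_.push_back(std::move(value));
        }
        cv_.notify_one();
    }
    // Blocks until an element is available, then returns it.
    T wait_and_pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !items_.empty(); });
        T value = std::move(items_.front());
        items_.pop_front();
        return value;
    }
private:
    std::deque<T> items_;
    std::mutex mutex_;
    std::condition_variable cv_;
};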
This is just a guess. I think that some code may actually be calling boost::thread::interrupt(). You can set a breakpoint on this function and see what code is responsible. You can test for interruption in execJobSync:
int Actor::execJobSync(const Job& job)
{
if (boost::this_thread::interruption_requested())
std::cout << "Interruption requested!" << std::endl;
std::string error;
boost::promise<int> promise;
boost::unique_future<int> future = promise.get_future();
The most suspicious code in this case is code that has a reference to the thread object.
It is good practice to make your boost::thread code interruption-aware anyway. It is also possible to disable interruption for some scope.
If this is not the case, you need to check code that works with thread-local storage, because the thread interruption flag is stored in TLS. Maybe some of your code overwrites it. You can check for interruption before and after such a code fragment.
Another possibility is that your memory is corrupted, if no code is calling boost::thread::interrupt() and you don't work with TLS. This is the hardest case; try to use a dynamic analyzer such as Valgrind or Clang's memory sanitizer.
Off-topic:
You probably need to use a proper concurrent queue. A std::queue protected by a single mutex will be very slow because of high contention, and you will end up with poor cache performance. A good concurrent queue allows your code to enqueue and dequeue elements in parallel.
Also, an actor is not something that is supposed to execute arbitrary code. An actor's queue should receive simple messages, not functions! You're writing a job queue :) You should take a look at an actor system like Akka or libcpa.
I am trying to create a thread in C++ (Win32) to run a simple method. I'm new to C++ threading, but very familiar with threading in C#. Here is some pseudo-code of what I am trying to do:
static void MyMethod(int data)
{
RunStuff(data);
}
void RunStuff(int data)
{
//long running operation here
}
I want to call RunStuff from MyMethod without it blocking. What would be the simplest way of running RunStuff on a separate thread?
Edit: I should also mention that I want to keep dependencies to a minimum. (No MFC... etc)
#include <boost/thread.hpp>
static boost::thread runStuffThread;
static void MyMethod(int data)
{
runStuffThread = boost::thread(boost::bind(RunStuff, data));
}
// elsewhere...
runStuffThread.join(); //blocks
C++11, available with more recent compilers such as Visual Studio 2013, has threads as part of the standard library, along with quite a few other nice bits and pieces such as lambdas.
The include file <thread> provides the std::thread class. The thread functionality lives in the std:: namespace, and some thread synchronization functions use std::this_thread as a namespace (see Why the std::this_thread namespace? for a bit of explanation).
The following console application example using Visual Studio 2013 demonstrates some of the thread functionality of C++11, including the use of a lambda (see What is a lambda expression in C++11?). Notice that the functions used for thread sleep, such as std::this_thread::sleep_for(), take a duration from std::chrono.
// threading.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <iostream>
#include <chrono>
#include <thread>
#include <mutex>
int funThread(const char *pName, const int nTimes, std::mutex *myMutex)
{
// loop the specified number of times each time waiting a second.
// we are using this mutex, which is shared by the threads to
// synchronize and allow only one thread at a time to output.
for (int i = 0; i < nTimes; i++) {
myMutex->lock();
std::cout << "thread " << pName << " i = " << i << std::endl;
// delay this thread that is running for a second.
// the this_thread construct allows us access to several different
// functions such as sleep_for() and yield(). we do the sleep
// before doing the unlock() to demo how the lock/unlock works.
std::this_thread::sleep_for(std::chrono::seconds(1));
myMutex->unlock();
std::this_thread::yield();
}
return 0;
}
int _tmain(int argc, _TCHAR* argv[])
{
// create a mutex which we are going to use to synchronize output
// between the two threads.
std::mutex myMutex;
// create and start two threads each with a different name and a
// different number of iterations. we provide the mutex we are using
// to synchronize the two threads.
std::thread myThread1(funThread, "one", 5, &myMutex);
std::thread myThread2(funThread, "two", 15, &myMutex);
// wait for our two threads to finish.
myThread1.join();
myThread2.join();
auto fun = [](int x) {for (int i = 0; i < x; i++) { std::cout << "lambda thread " << i << std::endl; std::this_thread::sleep_for(std::chrono::seconds(1)); } };
// create a thread from the lambda above requesting three iterations.
std::thread xThread(fun, 3);
xThread.join();
return 0;
}
CreateThread (Win32) and AfxBeginThread (MFC) are two ways to do it.
Either way, your MyMethod signature would need to change a bit.
Edit: as noted in the comments and by other respondents, CreateThread can be bad.
_beginthread and _beginthreadex are C runtime library functions and, according to the docs, are equivalent to System::Threading::Thread::Start.
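A minimal sketch using _beginthreadex for the RunStuff example from the question (the heap-allocated argument and the thread-procedure name are illustrative):
#include <windows.h>
#include <process.h> // _beginthreadex
#include <memory>

// _beginthreadex requires exactly this signature for the thread procedure.
static unsigned __stdcall RunStuffThread(void *param)
{
    std::unique_ptr<int> data(static_cast<int *>(param)); // take ownership of the argument
    // long running operation here, e.g. RunStuff(*data);
    return 0;
}

static void MyMethod(int data)
{
    int *arg = new int(data); // owned (and deleted) by the thread
    uintptr_t handle = _beginthreadex(nullptr, 0, RunStuffThread, arg, 0, nullptr);
    if (handle != 0)
        CloseHandle(reinterpret_cast<HANDLE>(handle)); // we don't wait for the thread
    else
        delete arg; // thread was not created
}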
Consider using the Win32 thread pool instead of spinning up new threads for work items. Spinning up new threads is wasteful - each thread gets 1 MB of reserved address space for its stack by default, runs the system's thread startup code, causes notifications to be delivered to nearly every DLL in your process, and creates another kernel object. Thread pools enable you to reuse threads for background tasks quickly and efficiently, and will grow or shrink based on how many tasks you submit. In general, consider spinning up dedicated threads for never-ending background tasks and use the threadpool for everything else.
Before Vista, you can use QueueUserWorkItem. On Vista, the new thread pool APIs are more reliable and offer a few more advanced options. Each will cause your background code to start running on some thread-pool thread.
// Vista
VOID CALLBACK MyWorkerFunction(PTP_CALLBACK_INSTANCE instance, PVOID context);
// Returns true on success.
TrySubmitThreadpoolCallback(MyWorkerFunction, context, NULL);
// Pre-Vista
DWORD WINAPI MyWorkerFunction(PVOID context);
// Returns true on success
QueueUserWorkItem(MyWorkerFunction, context, WT_EXECUTEDEFAULT);
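For instance, a minimal sketch of the pre-Vista variant; the WorkContext struct and the ownership convention are illustrative assumptions:
#include <windows.h>

struct WorkContext
{
    int data; // whatever the background task needs
};

// Runs on a thread-pool thread.
static DWORD WINAPI MyWorkerFunction(PVOID context)
{
    WorkContext *ctx = static_cast<WorkContext *>(context);
    // long running operation using ctx->data here
    delete ctx; // the work item owns its context
    return 0;
}

void SubmitWork(int data)
{
    WorkContext *ctx = new WorkContext{ data };
    if (!QueueUserWorkItem(MyWorkerFunction, ctx, WT_EXECUTEDEFAULT))
        delete ctx; // submission failed; clean up ourselves
}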
Simple threading in C++ is a contradiction in terms!
Check out boost threads for the closest thing to a simple approach available today.
For a minimal answer (which will not actually provide you with all the things you need for synchronization, but answers your question literally) see:
http://msdn.microsoft.com/en-us/library/kdzttdcb(VS.80).aspx
Also static means something different in C++.
Is this safe:
unsigned __stdcall myThread(void *ArgList) {
//Do stuff here
}
_beginthread(myThread, 0, &data);
Do I need to do anything to release the memory (like CloseHandle) after this call?
Another alternative is pthreads - they work on both Windows and Linux!
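A minimal sketch of the pthreads variant for the same RunStuff example (on Windows this assumes a pthreads implementation such as pthreads-win32 is available):
#include <pthread.h>

// pthreads requires this signature for the thread routine.
static void *RunStuffThread(void *param)
{
    int *data = static_cast<int *>(param);
    // long running operation here, e.g. RunStuff(*data);
    delete data; // the thread owns its argument
    return nullptr;
}

static void MyMethod(int data)
{
    pthread_t tid;
    int *arg = new int(data);
    if (pthread_create(&tid, nullptr, RunStuffThread, arg) == 0)
        pthread_detach(tid); // fire and forget; use pthread_join to wait instead
    else
        delete arg; // thread was not created
}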
CreateThread (Win32) and AfxBeginThread (MFC) are two ways to do it.
Be careful to use _beginthread if you need to use the C run-time library (CRT) though.
For Win32 only, and without additional libraries, you can use the CreateThread function:
http://msdn.microsoft.com/en-us/library/ms682453(VS.85).aspx
If you really don't want to use third-party libs (I would recommend boost::thread as explained in the other answers), you need to use the Win32 API:
// The thread procedure must have this signature; the integer argument is
// smuggled through the LPVOID parameter.
static DWORD WINAPI RunStuff(LPVOID param)
{
    int data = static_cast<int>(reinterpret_cast<INT_PTR>(param));
    //long running operation here
    return 0;
}
static void MyMethod(int data)
{
    HANDLE hThread = ::CreateThread(NULL,
                                    0,
                                    &RunStuff,
                                    reinterpret_cast<LPVOID>(static_cast<INT_PTR>(data)),
                                    0,
                                    NULL);
    // you can do whatever you want here
    ::WaitForSingleObject(hThread, INFINITE);
    ::CloseHandle(hThread);
}
There exist many open-source, cross-platform C++ threading libraries you could use.
Among them are:
Qt
Intel TBB
Boost thread
The way you describe it, I think either Intel TBB or Boost thread will be fine.
Intel TBB example:
class RunStuff
{
public:
// TBB mandates that you supply () operator
void operator ()()
{
// long running operation here
}
};
// Here's sample code to instantiate it
#include <tbb/tbb_thread.h>
RunStuff runStuff;
tbb::tbb_thread my_thread(runStuff);
Boost thread example:
http://www.ddj.com/cpp/211600441
Qt example:
http://doc.trolltech.com/4.4/threads-waitconditions-waitconditions-cpp.html
(I don't think this suits your needs, but it is included here for completeness; you have to inherit QThread, implement void run(), and call QThread::start().)
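A minimal sketch of that QThread approach (the class name and body are illustrative):
#include <QThread>

// Hypothetical QThread subclass: run() executes on the new thread.
class RunStuffThread : public QThread
{
protected:
    void run() override
    {
        // long running operation here
    }
};

// Usage:
//   RunStuffThread *t = new RunStuffThread;
//   t->start(); // spawns the thread and calls run() on it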
If you only program on Windows and don't care about cross-platform code, perhaps you could use Windows threads directly:
http://www.codersource.net/win32_multithreading.html