The new C++ standard has this std::thread type. Works like a charm.
Now I would like to give each thread a name for easier debugging (like Java allows you to).
With pthreads I would do:
pthread_setname_np(pthread_self(), "thread_name");
but how can I do this with c++0x?
I know it uses pthreads underneath on Linux systems, but I would like to make my application portable. Is it possible at all?
A portable way to do this is to maintain a map of names, keyed by the thread's ID, obtained from thread::get_id(). Alternatively, as suggested in the comments, you could use a thread_local variable, if you only need to access the name from within the thread.
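For example, a minimal sketch of the map approach (the ThreadNameRegistry class and its set/get methods are names I made up for illustration):

#include <map>
#include <mutex>
#include <string>
#include <thread>

// Hypothetical helper illustrating the "map keyed by thread::get_id()" idea.
class ThreadNameRegistry
{
public:
    void set(std::thread::id id, const std::string& name)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        names_[id] = name;
    }

    std::string get(std::thread::id id) const
    {
        std::lock_guard<std::mutex> lock(mutex_);
        auto it = names_.find(id);
        return it != names_.end() ? it->second : std::string("<unnamed>");
    }

private:
    mutable std::mutex mutex_;
    std::map<std::thread::id, std::string> names_;
};

// Usage: registry.set(std::this_thread::get_id(), "worker-1");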
If you didn't need portability, then you could get the underlying pthread_t from thread::native_handle() and do whatever platform-specific shenanigans you like with that. Be aware that the _np on the thread naming functions means "not posix", so they aren't guaranteed to be available on all pthreads implementations.
An attempt at making a wrapper to deal with many Linuxes as well as Windows. Please edit as needed.
#ifdef _WIN32
#include <windows.h>
#include <cstdint>
#include <thread>

const DWORD MS_VC_EXCEPTION = 0x406D1388;

#pragma pack(push,8)
typedef struct tagTHREADNAME_INFO
{
    DWORD dwType;     // Must be 0x1000.
    LPCSTR szName;    // Pointer to name (in user addr space).
    DWORD dwThreadID; // Thread ID (-1=caller thread).
    DWORD dwFlags;    // Reserved for future use, must be zero.
} THREADNAME_INFO;
#pragma pack(pop)

void SetThreadName(uint32_t dwThreadID, const char* threadName)
{
    // DWORD dwThreadID = ::GetThreadId( static_cast<HANDLE>( t.native_handle() ) );
    THREADNAME_INFO info;
    info.dwType = 0x1000;
    info.szName = threadName;
    info.dwThreadID = dwThreadID;
    info.dwFlags = 0;

    __try
    {
        RaiseException(MS_VC_EXCEPTION, 0, sizeof(info) / sizeof(ULONG_PTR), (ULONG_PTR*)&info);
    }
    __except (EXCEPTION_EXECUTE_HANDLER)
    {
    }
}

void SetThreadName(const char* threadName)
{
    SetThreadName(GetCurrentThreadId(), threadName);
}

void SetThreadName(std::thread* thread, const char* threadName)
{
    DWORD threadId = ::GetThreadId(static_cast<HANDLE>(thread->native_handle()));
    SetThreadName(threadId, threadName);
}

#elif defined(__linux__)
#include <sys/prctl.h>

void SetThreadName(const char* threadName)
{
    // Names the calling thread; the name is limited to 16 bytes including the terminator.
    prctl(PR_SET_NAME, threadName, 0, 0, 0);
}

#else
#include <thread>
#include <pthread.h>

void SetThreadName(std::thread* thread, const char* threadName)
{
    auto handle = thread->native_handle();
    pthread_setname_np(handle, threadName);
}
#endif
You can use std::thread::native_handle to get the underlying, implementation-defined thread handle. There is no standard function for naming a thread.
You can find an example here.
For the Windows [debugger], you can easily use the "normal" method:
http://msdn.microsoft.com/en-gb/library/xcb2z8hs.aspx
You just need the thread ID, which you can obtain via
#include <windows.h>
DWORD ThreadId = ::GetThreadId( static_cast<HANDLE>( mThread.native_handle() ) );
I've seen this done both in a system predating C++11 (where we basically invented our own Thread class that was very similar to std::thread) and in one I wrote fairly recently.
Basically, the pool puts std::thread two layers down: you have a PoolThread class that contains a std::thread plus metadata like its name, ID, etc. and the control structure that links it to its controlling pool, and then the ThreadPool itself (a compressed sketch follows the list below). You want to use thread pools in most threaded code for several reasons:
1) You can hide all the explicit "detach", "join", start thread on std::thread construction, etc. from users. That produces MUCH safer & cleaner code.
2) Better resource management: Too many threads will cripple performance even more than having too few. A well-built pool can do advanced things like automatic load balancing and cleaning-up hung or deadlocked threads.
3) Thread reuse: std::thread by itself is easiest to use by running every parallel task on its own thread. But thread creation & destruction are expensive, and can easily swamp the speed boost from parallel processing if you're not careful. So, it usually makes more sense to have pool threads that pull work tasks from a queue and only exit after receiving some signal.
4) Error handling: std::thread is just an execution context. If the task you're running on it throws an un-handled exception or std::thread ITSELF fails, the process will just crash right there. To do fault-tolerant multithreading, you need a Pool or something similar that can quickly catch such things and at least emit meaningful error messages before the process dies.
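To make the shape concrete, here is a compressed, illustrative sketch along those lines; it folds the PoolThread metadata down to a name string and omits load balancing, hang detection, and the rest, so treat it as a starting point rather than a finished pool:

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

class ThreadPool
{
public:
    explicit ThreadPool(std::size_t count)
    {
        for (std::size_t i = 0; i < count; ++i)
            workers_.emplace_back([this, i] { Run("pool-" + std::to_string(i)); });
    }

    ~ThreadPool()
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stopping_ = true;                       // the "exit" signal from point 3
        }
        wake_.notify_all();
        for (auto& w : workers_) w.join();          // join is hidden from users (point 1)
    }

    void Submit(std::function<void()> task)
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push(std::move(task));
        }
        wake_.notify_one();
    }

private:
    void Run(const std::string& name)
    {
        (void)name;  // stands in for the PoolThread metadata (would feed logging / thread naming)
        for (;;)
        {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                wake_.wait(lock, [this] { return stopping_ || !tasks_.empty(); });
                if (stopping_ && tasks_.empty()) return;
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            try { task(); }                          // basic fault tolerance (point 4)
            catch (const std::exception&) { /* log instead of crashing */ }
        }
    }

    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> tasks_;
    std::mutex mutex_;
    std::condition_variable wake_;
    bool stopping_ = false;
};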
In header file do:
const std::string & ThreadName(const std::string name="");
In src file do:
#include <atomic>
#include <string>

const std::string & ThreadName(const std::string name)
{
    // Counter shared by all threads, used to give each name a unique suffix.
    static std::atomic_int threadCount{0};

    // Initialized exactly once per thread, on the first call made from that thread.
    const thread_local std::string _name = name + std::to_string(threadCount.fetch_add(1));
    return _name;
}
Usage:
void myThread()
{
ThreadName("myThread"); // Call once at very beginning of your thread creation
...
std::cout << ThreadName() << std::endl; // Anyplace in your code
}
Related
I have the following code:
unsigned long ClassName::fn_StartTesterPresentThread()
{
    // m_hTesterPresent is a HANDLE
    if (m_hTesterPresent == NULL)
    {
        DWORD dwThreadId = 0;
        m_hTesterPresent = CreateThread(NULL, 0, th_TesterPresent, this, 0, &dwThreadId);
    }
    return ACTION_SUCCESS;
}
The CreateThread function is Windows-specific, but the code needs to be ported to Linux/non-platform-specific.
I now need a way to create a boost::thread and pass it into the HANDLE m_hTesterPresent.
I was thinking about something like this:
unsigned long Class::fn_StartTesterPresentThread()
{
    if (m_hTesterPresent == NULL)
    {
        DWORD dwThreadId = 0;
        boost::thread threadTesterPresent(&m_hTesterPresent);
        threadTesterPresent.join();
    }
    return ACTION_SUCCESS;
}
Is this the correct way? Sadly, I can't yet compile or test because I still have other parts of the project to port.
Furthermore, the project is a DLL and I don't have the client yet that calls the functions in the DLL.
Any help appreciated!
The argument to boost::thread should be the function to run, here th_TesterPresent, not the address of the handle.
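Something along these lines might be what you want, assuming m_hTesterPresent becomes a boost::thread member and th_TesterPresent can be called with the object pointer (I can't see those declarations, so this is only a sketch):

#include <boost/thread.hpp>

unsigned long ClassName::fn_StartTesterPresentThread()
{
    if (!m_hTesterPresent.joinable())          // no thread running yet
    {
        m_hTesterPresent = boost::thread(th_TesterPresent, this);
    }
    return ACTION_SUCCESS;
}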
If you are porting this code to non-Windows environments, I suggest you use typedefs to distinguish between the different kinds of HANDLEs.
Then you can have:
#ifndef WINDOWS
typedef boost::thread* THREAD_HANDLE;
typedef boost::mutex MUTEX_HANDLE;
#else
typedef HANDLE THREAD_HANDLE;
typedef HANDLE MUTEX_HANDLE;
#endif
and store THREAD_HANDLE / MUTEX_HANDLE values in various places. Instead of calling OS-specific functions directly, you can then call generic "createThreadHandle" functions that dispatch to different implementations depending on the OS.
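For illustration, a rough sketch of what such a dispatch layer could look like; the ThreadEntry typedef and the trampoline are my own inventions, not part of any library:

typedef void (*ThreadEntry)(void* arg);

#ifndef WINDOWS
  #include <boost/thread.hpp>
  typedef boost::thread* THREAD_HANDLE;

  THREAD_HANDLE createThreadHandle(ThreadEntry entry, void* arg)
  {
      return new boost::thread(entry, arg);      // caller joins and deletes later
  }
#else
  #include <windows.h>
  typedef HANDLE THREAD_HANDLE;

  namespace
  {
      struct TrampolineArgs { ThreadEntry entry; void* arg; };

      // Adapts the portable signature to what CreateThread expects.
      DWORD WINAPI trampoline(LPVOID p)
      {
          TrampolineArgs* t = static_cast<TrampolineArgs*>(p);
          t->entry(t->arg);
          delete t;
          return 0;
      }
  }

  THREAD_HANDLE createThreadHandle(ThreadEntry entry, void* arg)
  {
      return ::CreateThread(NULL, 0, trampoline, new TrampolineArgs{entry, arg}, 0, NULL);
  }
#endif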
Alternatively, you could port your code to an OS-agnostic framework like boost and solve two problems in one go.
I would like to check whether a thread is doing work. If the thread is doing work, I will wait on an event until the thread has stopped its work. The thread will set the event at the end.
To check whether the thread is working, I declared a volatile bool variable. The bool variable is true while the thread is running, otherwise it is false. At the end of the thread the bool variable is set to false.
Is it adequate to use a volatile bool variable, or do I have to use an atomic operation?
BTW: Can someone please explain the InterlockedExchange function to me? I don't understand the use case in which I would need it.
Update
I see that without my code it is not clear whether a volatile bool variable is adequate. I wrote a test class which shows my problem.
class Testclass
{
public:
    Testclass(void);
    ~Testclass(void);
    void doThreadedWork();
    void Work();
    void StartWork();
    void WaitUntilFinish();
private:
    HANDLE hHasWork;
    HANDLE hAbort;
    HANDLE hFinished;
    volatile bool m_bWorking;
};

//.cpp
#include "stdafx.h"
#include "Testclass.h"

CRITICAL_SECTION cs;

DWORD WINAPI myThread(LPVOID lpParameter)
{
    Testclass* pTestclass = (Testclass*) lpParameter;
    pTestclass->doThreadedWork();
    return 0;
}

Testclass::Testclass(void)
{
    InitializeCriticalSection(&cs);
    DWORD myThreadID;
    HANDLE myHandle = CreateThread(0, 0, myThread, this, 0, &myThreadID);
    m_bWorking = false;
    hHasWork = CreateEvent(NULL, TRUE, FALSE, NULL);
    hAbort = CreateEvent(NULL, TRUE, FALSE, NULL);
    hFinished = CreateEvent(NULL, FALSE, FALSE, NULL);
}

Testclass::~Testclass(void)
{
    DeleteCriticalSection(&cs);
    CloseHandle(hHasWork);
    CloseHandle(hAbort);
    CloseHandle(hFinished);
}

void Testclass::Work()
{
    // do some work
    m_bWorking = false;
    SetEvent(hFinished);
}

void Testclass::StartWork()
{
    EnterCriticalSection(&cs);
    m_bWorking = true;
    ResetEvent(hFinished);
    SetEvent(hHasWork);
    LeaveCriticalSection(&cs);
}

void Testclass::doThreadedWork()
{
    HANDLE hEvents[2];
    hEvents[0] = hHasWork;
    hEvents[1] = hAbort;
    while(true)
    {
        DWORD dwEvent = WaitForMultipleObjects(2, hEvents, FALSE, INFINITE);
        if(WAIT_OBJECT_0 == dwEvent)
        {
            Work();
        }
        else
        {
            break;
        }
    }
}

void Testclass::WaitUntilFinish()
{
    EnterCriticalSection(&cs);
    if(!m_bWorking)
    {
        // if the thread is not working, do not wait and return
        LeaveCriticalSection(&cs);
        return;
    }
    WaitForSingleObject(hFinished, INFINITE);
    LeaveCriticalSection(&cs);
}
For me it is not really clear whether m_bWorking needs to be read and written in an atomic way or whether the volatile qualifier is adequate.
There is a lot of background to cover for your question. We don't know, for example, what toolchain you are using, so I am going to answer it as a WinAPI question. I further assume you have something in mind like this:
volatile bool flag = false;

DWORD WINAPI WorkFn(void*) {
    flag = true;
    // work here
    ....
    // done.
    flag = false;
    return 0;
}

int main() {
    HANDLE th = CreateThread(...., &WorkFn, NULL, ..);

    // wait for start of work.
    while (!flag) {
        // ?? # 1
    }

    // Seems thread is busy now. Time to wait for it to finish.
    while (flag) {
        // ?? # 2
    }
}
There are many things wrong here. For starters, volatile does very little here. When flag = true happens it will eventually be visible to the other thread because it is backed by a global variable. It will at least make it into the cache, and the cache has ways to tell other processors that a given line (which is a range of addresses) is dirty. The only way it would not make it into the cache is if the compiler made a really aggressive optimization that kept flag in a CPU register. That could actually happen, but not in this particular code example.
So volatile tells the compiler to never keep the variable as a register. That is what it is, every time you see a volatile variable you can translate it as "never enregister this variable". Its use here is just basically a paranoid move.
If this code is what you had in mind then this looping over a flag pattern is called a Spinlock and this one is a really poor one. It is almost never the right thing to do in a user mode program.
Before we go into better approaches let me tackle your Interlocked question. What people usually mean is this pattern
volatile long flag = 0;

DWORD WINAPI WorkFn(void*) {
    InterlockedExchange(&flag, 1);
    ....
}

int main() {
    ...
    while (InterlockedCompareExchange(&flag, 1, 1) == 0L) {
        YieldProcessor();
    }
    ...
}
Assume the ... means similar code as before. What InterlockedExchange() does is force the write to memory to happen in a deterministic, "broadcast the change now" kind of way, and the typical way to read the value with the same "bypass the cache" semantics is via InterlockedCompareExchange().
One problem with them is that they generate more traffic on the system bus. That is, the bus is now being used to broadcast cache synchronization packets among the CPUs on the system.
std::atomic<bool> flag would be the modern, C++11 way to do the same, but still not what you really want to do.
I added the YieldProcessor() call there to point to the real problem. When you wait for a memory address to change you are using cpu resources that would be better used somewhere else, for example in the actual work (!!). If you actually yield the processor there is at least a chance that the OS will give it to the WorkFn, but in a multicore machine it will quickly go back to polling the variable. In a modern machine you will be checking this flag millions of times per second, with the yield, probably 200000 times per second. Terrible waste either way.
What you want to do here is to leverage Windows to do a zero-cost wait, or at least a low cost as you want to:
DWORD WINAPI WorkFn(void*) {
    // work here
    ....
    return 0;
}

int main() {
    HANDLE th = CreateThread(...., &WorkFn, NULL, ..);
    WaitForSingleObject(th, INFINITE);
    // work is done!
    CloseHandle(th);
}
When you return from the worker thread, the thread handle gets signaled and the wait is satisfied. While stuck in WaitForSingleObject you don't consume any CPU cycles. If you want to do a periodic activity in the main() function while you wait, you can replace INFINITE with 1000, which will release the main thread every second. In that case you need to check the return value of WaitForSingleObject to tell the timeout case from the thread-being-done case.
If you need to actually know when work started, you need an additional waitable object, for example a Windows event, which is obtained via CreateEvent() and can be waited on using the same WaitForSingleObject.
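A small sketch of both ideas, the 1000 ms periodic wait and a "work started" event (hStarted is a name I made up; WorkFn is assumed to SetEvent() it once it begins):

HANDLE hStarted = ::CreateEvent(NULL, TRUE, FALSE, NULL);   // manual-reset, unsignaled
HANDLE th = ::CreateThread(NULL, 0, &WorkFn, hStarted, 0, NULL);
// Inside WorkFn, call SetEvent(static_cast<HANDLE>(param)) once the work begins.

::WaitForSingleObject(hStarted, INFINITE);                  // work has started

for (;;)
{
    DWORD rc = ::WaitForSingleObject(th, 1000);             // wake up once a second
    if (rc == WAIT_OBJECT_0)
        break;                                              // thread finished
    if (rc == WAIT_TIMEOUT)
    {
        // periodic activity while the worker is still running
    }
}
::CloseHandle(th);
::CloseHandle(hStarted);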
Update [1/23/2016]
Now that we can see the code you have in mind, you don't need atomics; volatile works just fine. The m_bWorking is protected by the cs critical section anyhow for the true case.
If I might suggest, you can use TryEnterCriticalSection and cs to accomplish the same without m_bWorking at all:
void Testclass::Work()
{
    EnterCriticalSection(&cs);
    // do some work
    LeaveCriticalSection(&cs);
    SetEvent(hFinished);          // could be removed as well
}

void Testclass::StartWork()
{
    ResetEvent(hFinished);        // could be removed.
    SetEvent(hHasWork);
}

void Testclass::WaitUntilFinish()
{
    if (TryEnterCriticalSection(&cs)) {
        // Not busy now.
        LeaveCriticalSection(&cs);
        return;
    } else {
        // busy doing work. If we use EnterCriticalSection(&cs)
        // here we can even eliminate hFinished from the code.
    }
    ...
}
For some reason, the Interlocked API does not include an "InterlockedGet" or "InterlockedSet" function. This is a strange omission, and the typical workaround is to cast through volatile.
You can use code like the following on Windows:
#include <intrin.h>

__inline int InterlockedIncrement(int *j)
{   // This is VS-specific
    return _InterlockedIncrement((volatile LONG *) j);
}

__inline int InterlockedDecrement(int *j)
{   // This is VS-specific
    return _InterlockedDecrement((volatile LONG *) j);
}

__inline static void InterlockedSet(int *val, int newval)
{
    *((volatile int *)val) = newval;
}

__inline static int InterlockedGet(int *val)
{
    return *((volatile int *)val);
}
Yes, it's ugly. But it's the best way to work around the deficiency if you're not using C++11. If you're using C++11, use std::atomic instead.
Note that this is Windows-specific code and should not be used on other platforms.
No, volatile bool will not be enough. You need an atomic bool, as you correctly suspect. Otherwise, you might never see your bool updated.
There is also no InterlockedExchange in standard C++ (the tags of your question), but there are compare_exchange_weak and compare_exchange_strong member functions on std::atomic in C++11. They are used to set the value of an object to a certain NewValue, provided its current value is TestValue, and to indicate whether the attempt succeeded. The benefit of these functions is that this is done in such a fashion that if two threads try to perform the operation at the same time, only one will succeed. This is very helpful when you need to take a certain action depending on the result of the operation.
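For example, a tiny illustrative sketch (finish_once is a made-up name) where exactly one of several threads wins the right to run a one-time action:

#include <atomic>

std::atomic<bool> done{false};

void finish_once()
{
    bool expected = false;
    // Succeeds for exactly one caller: flips false -> true and returns true.
    if (done.compare_exchange_strong(expected, true))
    {
        // run the one-time action here
    }
    // On failure, 'expected' now holds the value another thread already stored.
}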
I have an MFC class that launches threads, and the threads need to modify CString members of the main class.
I hate mutex locks, so there must be an easier way to do this.
I am thinking of using the boost.org library, atl::atomic, or shared_ptr variables.
What is the best method of reading and writing the string in a thread-safe way?
class MyClass
{
public:
    MyClass();
    static UINT MyThread(LPVOID pArg);
    CString m_strInfo;
};

MyClass::MyClass()
{
    AfxBeginThread(MyThread, this);
    CString strTmp = m_strInfo; // this may cause crash
}

UINT MyClass::MyThread(LPVOID pArg)
{
    MyClass* pClass = (MyClass*)pArg;
    pClass->m_strInfo = _T("New Value"); // non thread-safe change
    return 0;
}
According to MSDN shared_ptr works automatically https://msdn.microsoft.com/en-us/library/bb982026.aspx
So is this a better method?
#include <memory>

class MyClass
{
public:
    MyClass();
    static UINT MyThread(LPVOID pArg);
    std::shared_ptr<CString> m_strInfo; // ********
};

MyClass::MyClass()
{
    AfxBeginThread(MyThread, this);
    CString strTmp = *m_strInfo; // this may cause crash
}

UINT MyClass::MyThread(LPVOID pArg)
{
    MyClass* pClass = (MyClass*)pArg;
    std::shared_ptr<CString> newValue(new CString());
    newValue->SetString(_T("New Value"));
    pClass->m_strInfo = newValue; // thread-safe change?
    return 0;
}
You could implement some kind of lockless approach to achieve that, but it depends on how you use MyClass and your thread. If your thread processes some data and after processing it needs to update MyClass, then consider putting your string data in some other class, e.g.:
struct StringData {
CString m_strInfo;
};
then inside your MyClass:
class MyClass
{
public:
    MyClass();
    static UINT MyThread(LPVOID pArg);
    StringData* m_pstrData;
    StringData* m_pstrDataForThreads;
};
Now, the idea is that in your main-thread code you use m_pstrData, but you need to use atomics to load a local pointer to it, e.g.:
MyClass::MyClass()
{
    AfxBeginThread(MyThread, this);
    StringData* pstrDataTemp = ATOMIC_READ(m_pstrData);
    if ( pstrDataTemp )
        CString strTmp = pstrDataTemp->m_strInfo; // this may NOT cause crash
}
Once your thread has finished processing the data and wants to update the string, you atomically assign m_pstrDataForThreads to m_pstrData and allocate a new m_pstrDataForThreads.
The problem is how to safely delete the old m_pstrData; I suppose you could use std::shared_ptr here.
In the end it looks kind of complicated and IMO not really worth the effort; at least it is hard to tell whether this is really thread-safe, and whether it will stay thread-safe when the code gets more complicated. Also, this is for the single-worker-thread case, and you say you have multiple threads. That's why a critical section is the starting point, and only if it is too slow should you think about a lockless approach.
BTW, depending on how often your string data is updated, you could also think about using PostMessage to safely pass a pointer to the new string to your main thread.
[edit]
ATOMIC_READ does not exist; it's just a placeholder. To make it compile, use e.g. C++11 atomics, example below:
#include <atomic>
...
std::atomic<uint64_t> sharedValue(0);
sharedValue.store(123, std::memory_order_relaxed); // atomically store
uint64_t ret = sharedValue.load(std::memory_order_relaxed); // atomically read
std::cout << ret;
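Tying that back to the shared_ptr remark above: C++11 also provides std::atomic_load/std::atomic_store overloads for shared_ptr, which would let the worker publish a fresh, fully built StringData while readers take a snapshot. A rough sketch (PublishNewString/ReadString are illustrative names, and StringData is the struct from above):

#include <memory>

std::shared_ptr<StringData> g_strData;

void PublishNewString(const CString& s)       // called from the worker thread
{
    std::shared_ptr<StringData> fresh(new StringData());
    fresh->m_strInfo = s;                     // fill it completely before publishing
    std::atomic_store(&g_strData, fresh);     // old snapshot is freed when the last reader drops it
}

CString ReadString()                          // called from the main thread
{
    std::shared_ptr<StringData> snap = std::atomic_load(&g_strData);
    return snap ? snap->m_strInfo : CString();
}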
I would have used a simpler approach: protect the variable with a SetStrInfo method:
void SetStrInfo(const CString& str)
{
    [Lock-here]
    m_strInfo = str;
    [Unlock-here]
}
For locking and unlocking we may use a CCriticalSection (as a class member), or wrap it in a CSingleLock for RAII. We may also use slim reader/writer locks for performance reasons (wrap them in RAII too; it's a simple class to write). We may also use newer C++ techniques for RAII locking/unlocking.
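For instance, a small sketch with CCriticalSection plus CSingleLock (the GetStrInfo accessor is my own addition; readers must take the same lock and should return a copy):

#include <afxmt.h>   // CCriticalSection, CSingleLock

class MyClass
{
public:
    void SetStrInfo(const CString& str)
    {
        CSingleLock lock(&m_cs, TRUE);   // locks here, unlocks in the destructor
        m_strInfo = str;
    }

    CString GetStrInfo()
    {
        CSingleLock lock(&m_cs, TRUE);
        return m_strInfo;                // return a copy, not a reference
    }

private:
    CCriticalSection m_cs;
    CString m_strInfo;
};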
Call me old-school, but for me the std namespace has a complicated set of options; it doesn't suit everything, or everyone.
I recently learnt that ::_beginthreadex() is always preferable to ::CreateThread(), so I changed all my calls that used ::CreateThread().
The only downside is that I no longer see the thread function's name in Visual Studio's Threads window, making it hard to quickly identify threads. I assume this was somehow done automatically by the debugger when I used ::CreateThread(); since the parameters are exactly the same, I just changed the name of the function used.
Is there any way to keep using ::_beginthreadex() and to see the thread's name in the Threads window of Visual Studio?
This happens because _beginthreadex() calls CreateThread() with its own thread function, which in turn calls the one you specify (so the debugger shows the _threadstartex function name, the one that _beginthreadex() invokes).
You can manually set the thread name yourself using the SetThreadName() example from MSDN. What you might want to do is create your own wrapper for _beginthreadex() that maybe looks something like:
uintptr_t startthreadex(
    void* security,
    unsigned stacksize,
    unsigned (__stdcall * threadproc) (void *),
    void* args,
    unsigned flags,
    unsigned * pThread_id,
    char* thread_name)
{
    unsigned alt_thread_id;
    if (!pThread_id) {
        pThread_id = &alt_thread_id;
    }

    uintptr_t result = _beginthreadex(security, stacksize, threadproc, args, flags, pThread_id);
    if (result == 0) return result;

    SetThreadName(*pThread_id, thread_name);
    return result;
}
Now you can call startthreadex() instead of _beginthreadex() and pass it a thread name. A small advantage to this is that if you use the same function to run several threads, you can easily give them each unique names that reflect the parameters passed to the thread, or whatever.
If you want the thread name to automatically be the thread proc's function name in the debugger, you could make a wrapper macro that stringizes the function name parameter (all it takes is another level of indirection or two to solve any problem...).
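For example, a possible (untested) macro along those lines, built on the startthreadex wrapper above; START_NAMED_THREAD is a made-up name:

// Stringizes the thread proc's name so it shows up in the debugger automatically.
// The cast is needed because the MSDN SetThreadName sample takes a non-const char*.
#define START_NAMED_THREAD(security, stacksize, threadproc, args, flags, pid) \
    startthreadex((security), (stacksize), (threadproc), (args), (flags), (pid), (char*)#threadproc)

// Usage:
// uintptr_t h = START_NAMED_THREAD(NULL, 0, MyThreadProc, pData, 0, NULL);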
Here's SetThreadName() (it's from http://msdn.microsoft.com/en-us/library/xcb2z8hs.aspx):
//
// Usage: SetThreadName (-1, "MainThread");
//
#include <windows.h>

const DWORD MS_VC_EXCEPTION = 0x406D1388;

#pragma pack(push,8)
typedef struct tagTHREADNAME_INFO
{
    DWORD dwType;     // Must be 0x1000.
    LPCSTR szName;    // Pointer to name (in user addr space).
    DWORD dwThreadID; // Thread ID (-1=caller thread).
    DWORD dwFlags;    // Reserved for future use, must be zero.
} THREADNAME_INFO;
#pragma pack(pop)

void SetThreadName( DWORD dwThreadID, char* threadName)
{
    THREADNAME_INFO info;
    info.dwType = 0x1000;
    info.szName = threadName;
    info.dwThreadID = dwThreadID;
    info.dwFlags = 0;

    __try
    {
        RaiseException( MS_VC_EXCEPTION, 0, sizeof(info)/sizeof(ULONG_PTR), (ULONG_PTR*)&info );
    }
    __except(EXCEPTION_EXECUTE_HANDLER)
    {
    }
}
There is no particular advantage to using _beginthreadex over CreateThread. The CRT functions eventually call CreateThread anyway.
You should read:
Windows threading: _beginthread vs _beginthreadex vs CreateThread C++
http://www.codeguru.com/forum/showthread.php?t=371305
http://social.msdn.microsoft.com/forums/en-US/vclanguage/thread/c727ae29-5a7a-42b6-ad0b-f6b21c1180b2
I am trying to create a thread in C++ (Win32) to run a simple method. I'm new to C++ threading, but very familiar with threading in C#. Here is some pseudo-code of what I am trying to do:
static void MyMethod(int data)
{
    RunStuff(data);
}

void RunStuff(int data)
{
    //long running operation here
}
I want to call RunStuff from MyMethod without it blocking. What would be the simplest way of running RunStuff on a separate thread?
Edit: I should also mention that I want to keep dependencies to a minimum. (No MFC... etc)
#include <boost/thread.hpp>

static boost::thread runStuffThread;

static void MyMethod(int data)
{
    runStuffThread = boost::thread(boost::bind(RunStuff, data));
}

// elsewhere...
runStuffThread.join(); //blocks
C++11, available with more recent compilers such as Visual Studio 2013, has threads as part of the standard library, along with quite a few other nice bits and pieces such as lambdas.
The header <thread> provides the std::thread class (its constructor is a template that accepts any callable). Thread functionality lives in the std namespace, and some thread helpers live under std::this_thread (see Why the std::this_thread namespace? for a bit of explanation).
The following console application example using Visual Studio 2013 demonstrates some of the thread functionality of C++11, including the use of a lambda (see What is a lambda expression in C++11?). Notice that the functions used for thread sleep, such as std::this_thread::sleep_for(), use durations from std::chrono.
// threading.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <iostream>
#include <chrono>
#include <thread>
#include <mutex>

int funThread(const char *pName, const int nTimes, std::mutex *myMutex)
{
    // loop the specified number of times, each time waiting a second.
    // we are using this mutex, which is shared by the threads, to
    // synchronize and allow only one thread at a time to output.
    for (int i = 0; i < nTimes; i++) {
        myMutex->lock();
        std::cout << "thread " << pName << " i = " << i << std::endl;
        // delay this thread that is running for a second.
        // the this_thread construct allows us access to several different
        // functions such as sleep_for() and yield(). we do the sleep
        // before doing the unlock() to demo how the lock/unlock works.
        std::this_thread::sleep_for(std::chrono::seconds(1));
        myMutex->unlock();
        std::this_thread::yield();
    }
    return 0;
}

int _tmain(int argc, _TCHAR* argv[])
{
    // create a mutex which we are going to use to synchronize output
    // between the two threads.
    std::mutex myMutex;

    // create and start two threads each with a different name and a
    // different number of iterations. we provide the mutex we are using
    // to synchronize the two threads.
    std::thread myThread1(funThread, "one", 5, &myMutex);
    std::thread myThread2(funThread, "two", 15, &myMutex);

    // wait for our two threads to finish.
    myThread1.join();
    myThread2.join();

    auto fun = [](int x) {
        for (int i = 0; i < x; i++) {
            std::cout << "lambda thread " << i << std::endl;
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
    };

    // create a thread from the lambda above requesting three iterations.
    std::thread xThread(fun, 3);
    xThread.join();
    return 0;
}
CreateThread (Win32) and AfxBeginThread (MFC) are two ways to do it.
Either way, your MyMethod signature would need to change a bit.
Edit: as noted in the comments and by other respondents, CreateThread can be bad.
_beginthread and _beginthreadex are the C runtime library's thread-creation functions, and according to the docs are the equivalent of System::Threading::Thread::Start.
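For what it's worth, a minimal _beginthreadex sketch (MyThreadProc and StartIt are illustrative names; the argument must outlive the thread):

#include <process.h>    // _beginthreadex
#include <windows.h>

unsigned __stdcall MyThreadProc(void* arg)
{
    int data = *static_cast<int*>(arg);
    // long running operation here
    return 0;
}

void StartIt(int& data)   // 'data' must stay alive until the thread is done with it
{
    HANDLE h = reinterpret_cast<HANDLE>(
        _beginthreadex(NULL, 0, MyThreadProc, &data, 0, NULL));
    // ... later:
    ::WaitForSingleObject(h, INFINITE);
    ::CloseHandle(h);
}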
Consider using the Win32 thread pool instead of spinning up new threads for work items. Spinning up new threads is wasteful - each thread gets 1 MB of reserved address space for its stack by default, runs the system's thread startup code, causes notifications to be delivered to nearly every DLL in your process, and creates another kernel object. Thread pools enable you to reuse threads for background tasks quickly and efficiently, and will grow or shrink based on how many tasks you submit. In general, consider spinning up dedicated threads for never-ending background tasks and use the threadpool for everything else.
Before Vista, you can use QueueUserWorkItem. On Vista, the new thread pool APIs are more reliable and offer a few more advanced options. Either will cause your background code to start running on some thread pool thread.
// Vista
VOID CALLBACK MyWorkerFunction(PTP_CALLBACK_INSTANCE instance, PVOID context);
// Returns true on success.
TrySubmitThreadpoolCallback(MyWorkerFunction, context, NULL);
// Pre-Vista
DWORD WINAPI MyWorkerFunction(PVOID context);
// Returns true on success
QueueUserWorkItem(MyWorkerFunction, context, WT_EXECUTEDEFAULT);
Simple threading in C++ is a contradiction in terms!
Check out boost threads for the closest thing to a simple approach available today.
For a minimal answer (which will not actually provide you with all the things you need for synchronization, but answers your question literally) see:
http://msdn.microsoft.com/en-us/library/kdzttdcb(VS.80).aspx
Also static means something different in C++.
Is this safe:
unsigned __stdcall myThread(void *ArgList) {
//Do stuff here
}
_beginthread(myThread, 0, &data);
Do I need to do anything to release the memory (like CloseHandle) after this call?
Another alternative is pthreads - they work on both windows and linux!
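On Windows that means using a pthreads port such as pthreads-win32. A minimal sketch (RunStuffThread and StartIt are illustrative names):

#include <pthread.h>

void* RunStuffThread(void* arg)         // pthread entry points return void*
{
    int data = *static_cast<int*>(arg);
    // long running operation here
    return NULL;
}

void StartIt(int& data)
{
    pthread_t tid;
    pthread_create(&tid, NULL, RunStuffThread, &data);
    pthread_detach(tid);                // or pthread_join(tid, NULL) to wait for it
}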
CreateThread (Win32) and AfxBeginThread (MFC) are two ways to do it.
Be careful to use _beginthread if you need to use the C run-time library (CRT) though.
For Win32 only and without additional libraries, you can use the CreateThread function:
http://msdn.microsoft.com/en-us/library/ms682453(VS.85).aspx
If you really don't want to use third-party libs (I would recommend boost::thread as explained in the other answers), you need to use the Win32 API:
static DWORD WINAPI RunStuff(LPVOID param)
{
    int data = reinterpret_cast<int>(param);
    //long running operation here
    return 0;
}

static void MyMethod(int data)
{
    HANDLE hThread = ::CreateThread(NULL,
                                    0,
                                    &RunStuff,
                                    reinterpret_cast<LPVOID>(data),
                                    0,
                                    NULL);
    // you can do whatever you want here
    ::WaitForSingleObject(hThread, INFINITE);
    ::CloseHandle(hThread);
}
There exist many open-source, cross-platform C++ threading libraries you could use.
Among them are:
Qt
Intel TBB
Boost thread
The way you describe it, I think either Intel TBB or Boost thread will be fine.
Intel TBB example:
class RunStuff
{
public:
    // TBB mandates that you supply the () operator
    void operator ()()
    {
        // long running operation here
    }
};

// Here's sample code to instantiate it
#include <tbb/tbb_thread.h>

RunStuff runStuff;
tbb::tbb_thread my_thread(runStuff);
Boost thread example:
http://www.ddj.com/cpp/211600441
Qt example:
http://doc.trolltech.com/4.4/threads-waitconditions-waitconditions-cpp.html
(I don't think this suits your needs, but it's included here for completeness; you have to inherit QThread, implement void run(), and call QThread::start()):
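For completeness, a hedged sketch of that QThread-subclass pattern (RunStuffThread is a made-up name):

#include <QThread>

class RunStuffThread : public QThread
{
protected:
    void run()                 // executed in the new thread once start() is called
    {
        // long running operation here
    }
};

// Usage:
// RunStuffThread t;
// t.start();                  // spawns the thread and calls run()
// t.wait();                   // blocks until run() returns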
If you only program on Windows and don't care about cross-platform portability, perhaps you could use Windows threads directly:
http://www.codersource.net/win32_multithreading.html