I want to pass some data around between threads, but I'd like to refrain from using global variables if I can manage it. The way I wrote my thread routine, the user passes in a separate function for each "phase" of a thread's life cycle. For instance, this would be a typical usage of spawning a thread:
void init_thread(void *arg) {
graphics_init();
}
void process_msg_thread(message *msg, void *arg) {
if (msg->ID == MESSAGE_DRAW) {
graphics_draw();
}
}
void cleanup_thread(void *arg) {
graphics_cleanup();
}
int main () {
threadCreator factory;
factory.createThread(init_thread, 0, process_msg_thread, 0, cleanup_thread, 0);
// the even-numbered arguments are the args to be passed into their respective functions
// each of those functions must have a fixed signature so they can be passed to the factory this way
}
// Behind the scenes: in the newly spawned thread, the first argument given to
// createThread() is called, then a message pumping loop which will call the third
// argument is entered. Upon receiving a special exit message via another function
// of threadCreator, the fifth argument is called.
The most straightforward way to do it is with globals, but I'd like to avoid that: it is bad programming practice and it generates clutter.
A certain problem arises when I try to refine my example slightly:
void init_thread(void *arg) {
GLuint tex_handle[50]; // suppose I've got 50 textures to deal with.
graphics_init(tex_handle); // graphics init loads my textures and fills up the array with their handles
}
void process_msg_thread(message *msg, void *arg) {
if (msg->ID == MESSAGE_DRAW) { // this message indicates which texture my thread was told to draw
graphics_draw_this_texture(tex_handle[msg->texturehandleindex]); // send back the handle so it knows what to draw
}
}
void cleanup_thread(void *arg) {
graphics_cleanup();
}
I am greatly simplifying the interaction with the graphics system here, but you get the point. In this example code tex_handle is an automatic variable, and all its values are lost when init_thread completes, so it will not be available when process_msg_thread needs to reference it.
I can fix this by using globals, but that means I can't have (for instance) two of these threads running simultaneously, since they would trample on each other's texture handle list by sharing the same one.
I could use thread-local globals, but is that a good idea?
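For what it's worth, here is a minimal sketch of the thread-local idea using C++11's thread_local keyword (an older toolchain would need __thread or boost::thread_specific_ptr instead); it simply reuses the functions from the example above, so each spawned thread would get its own private copy of the handle array:

// Each thread that touches tls_tex_handle gets its own 50-element array,
// so two graphics threads no longer trample each other's handles.
thread_local GLuint tls_tex_handle[50];

void init_thread(void *arg) {
    graphics_init(tls_tex_handle); // fills this thread's private copy
}

void process_msg_thread(message *msg, void *arg) {
    if (msg->ID == MESSAGE_DRAW) {
        graphics_draw_this_texture(tls_tex_handle[msg->texturehandleindex]);
    }
}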
I came up with one last idea. I can allocate storage on the heap in my parent thread and send a pointer to it to the children to work with. I can then free it when the parent thread exits, since I intend for it to clean up its child threads before it exits anyway. So, something like this:
void init_thread(void *arg) {
GLuint *tex_handle = (GLuint*)arg; // my storage space passed as arg
graphics_init(tex_handle);
}
void process_msg_thread(message *msg, void *arg) {
GLuint *tex_handle = (GLuint*)arg; // same thing here
if (msg->ID == MESSAGE_DRAW) {
graphics_draw_this_texture(tex_handle[msg->texturehandleindex]);
}
}
int main () {
threadCreator factory;
GLuint *tex_handle = new GLuint[50];
factory.createThread(init_thread, tex_handle, process_msg_thread, tex_handle, cleanup_thread, 0);
// do stuff, wait etc
...
delete[] tex_handle;
}
This looks more or less safe: the values live on the heap, and my main thread allocates the storage and then lets the children use it as they wish. The children can use the storage freely, since the pointer was given to all the functions that need access.
So this got me thinking: why not just make it an automatic variable?
int main () {
threadCreator factory;
GLuint tex_handle[50];
factory.createThread(init_thread, tex_handle, process_msg_thread, tex_handle, cleanup_thread, 0);
// do stuff, wait etc
...
} // tex_handle automatically cleaned up at this point
This means the child threads directly access the parent's stack. I wonder if this is kosher.
I found this on the internets: http://software.intel.com/sites/products/documentation/hpc/inspectorxe/en-us/win/ug_docs/olh/common/Problem_Type__Potential_Privacy_Infringement.htm
it seems Intel Inspector XE detects this behavior. So maybe I shouldn't do it? Is it simply a warning of potential privacy infringement, as suggested by the URL, or are there other potential issues that may arise that I am not aware of?
P.S. After thinking through all this, I realize that maybe this architecture of splitting a thread into a bunch of functions that get called independently wasn't such a great idea. My intention was to remove the complexity of having to code up a message-handling loop for each thread that gets spawned. I had anticipated possible problems: with a generalized thread implementation that always checks for messages (like my custom one that signals the thread to terminate), I could guarantee that a future user could not accidentally forget to check for that condition in each and every message loop of theirs.
The problem with my solution is that those individual functions are now separate and cannot communicate with each other. They may do so only via globals and thread-local globals. I guess thread-local globals may be my best option.
P.P.S. This got me thinking about RAII, and how the concept of the thread, at least as I have ended up representing it, has a certain similarity with that of a resource. Maybe I could build an object that represents a thread more naturally than the traditional ways... somehow. I think I will go sleep on it.
Put your thread functions into a class. Then they can communicate using instance variables. This requires your thread factory to be changed, but is the cleanest way to solve your problem.
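A rough sketch of that suggestion, reusing the names from the question (the factory call at the bottom is hypothetical, since the existing threadCreator API would have to change to accept an object or member-function pointers):

// Per-thread state lives in member variables; the three lifecycle
// callbacks become member functions that can all see that state.
class GraphicsThread
{
public:
    void init()                    { graphics_init(m_texHandles); }
    void processMsg(message *msg)
    {
        if (msg->ID == MESSAGE_DRAW)
            graphics_draw_this_texture(m_texHandles[msg->texturehandleindex]);
    }
    void cleanup()                 { graphics_cleanup(); }
private:
    GLuint m_texHandles[50];       // shared by all three callbacks, no globals needed
};

// Hypothetical usage once the factory is reworked:
//   GraphicsThread *gt = new GraphicsThread;
//   factory.createThread(gt);    // factory calls gt->init(), gt->processMsg(), gt->cleanup()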
Your idea of using automatic variables will work too, as long as you can guarantee that the function whose stack frame contains the data never returns before your child threads exit. This is not easy to achieve: even after main() returns, child threads can still run.
Related
I'm trying to use multiple threads to make one function run concurrently with another, but when the function that the new thread is running calls a static function, that static function always returns 0 for some reason.
I'm using Boost for the threading, on Linux, and the static functions work exactly as expected when not using threads. I'm pretty sure this isn't a data race issue because if I join the thread directly after making it (not giving any other code a chance to change anything), the problem persists.
The function that the thread is created in:
void WorldIOManager::createWorld(unsigned int seed, std::string worldName, bool isFlat) {
boost::thread t( [=]() { P_createWorld(seed, worldName, isFlat); } );
t.join();
//P_createWorld(seed, worldName, isFlat); // This works perfectly fine
}
The part of P_createWorld that uses a static function (The function that the newly-created thread actually runs):
m_world->chunks[i]->tiles[y][x] = createBlock(chunkData[i].tiles[y][x].id, chunkData[i].tiles[y][x].pos, m_world->chunks[i]);
m_world is a struct that contains an array of Chunks, each of which has a two-dimensional array of Tiles, and each Tile has a texture ID associated with a texture in a cache. createBlock returns a pointer to a new, completely initialized tile. The static function in question belongs to a statically linked library and is defined as follows:
namespace GLEngine {
//This is a way for us to access all our resources, such as
//Models or textures.
class ResourceManager
{
public:
static GLTexture getTexture(std::string texturePath);
private:
static TextureCache _textureCache;
};
}
Also, its implementation:
#include "ResourceManager.h"
namespace GLEngine {
TextureCache ResourceManager::_textureCache;
GLTexture ResourceManager::getTexture(std::string texturePath) {
return _textureCache.getTexture(texturePath);
}
}
Expected result: For each tile to actually get assigned its proper texture ID
Actual result: Every tile, no matter the texturePath, is assigned 0 as its texture ID.
If you need any more code like the constructor for a tile or createBlock(), I'll happily add it, I just don't really know what information is relevant in this kind of situation...
So, as I stated before, all of this works perfectly if I don't have a thread, so my final question is: Is there some sort of undefined behaviour that has to do with static functions being called by threads, or am I just doing something wrong here?
As @fifoforlifo mentioned, OpenGL contexts have thread affinity, and it turns out I was making GL calls deeper in my texture loading function. I created a second GL context, turned on context sharing, and then it began to work. Thanks a lot, @fifoforlifo!
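For reference, here is a minimal sketch of that fix assuming a GLFW-based setup (the question never names the windowing library, so treat the calls purely as an illustration of the idea): create a second, hidden window whose context shares objects with the main one, and make it current on the worker thread before doing any texture-loading GL calls.

#include <GLFW/glfw3.h>

int main()
{
    glfwInit();
    GLFWwindow *mainWindow = glfwCreateWindow(800, 600, "game", NULL, NULL);

    // The fifth argument asks GLFW to share objects (textures, buffers, ...)
    // between the two contexts; the hint keeps the loader window invisible.
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
    GLFWwindow *loaderWindow = glfwCreateWindow(1, 1, "loader", NULL, mainWindow);

    glfwMakeContextCurrent(mainWindow); // the rendering thread keeps using this context
    // The worker thread would call glfwMakeContextCurrent(loaderWindow)
    // before any GL calls made during texture loading (e.g. inside P_createWorld).

    glfwTerminate();
    return 0;
}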
Some functions in my program need a long time to run, so the user may want to interrupt them. The structure is like this:
int MainWindow::someFunc1()
{
//VP is a class defined somewhere.
VP vp1;
//the for loop that needs time to execute.
return 0;
}
int MainWindow::someFunc2()
{
VP vp2;
//another loop that consumes time.
return 0;
}
If the user runs either of the functions (or both at the same time) and clicks exit at the top right, the program will still run in the background until the loop is finished. I tried to free the resources in void closeEvent(QCloseEvent *):
void MainWindow::closeEvent(QCloseEvent *)
{
vp.stopIt();
}
However, since vp1 and vp2 are local variables, I don't know how to pass them into the closeEvent() function to free their resources. Any suggestions will be appreciated.
Since the variables are created on the stack, they will be freed automatically at the end of their scope (at the closing } of the function in your case), so you don't have to worry about them.
If you want to free them before the function ends, you need to restructure the functions and probably allocate and free the memory for those variables yourself, outside of the functions. How you pass them to the functions (either as function arguments or as members of the class) is up to you.
You can't. You should declare vp1 and vp2 as member variables of MainWindow.
As far as I understand the OP's requirement, he's looking for a way to interrupt someFunc1 or someFunc2 when the main window is closed.
Those functions run in the GUI thread, so the following statement is a misunderstanding
the program will still run in background until the loop is finished
What actually happens is that the program runs until the function is complete, and only then is the user action processed by the framework. Therefore, when void MainWindow::closeEvent is executed, nothing is running in the background and the resources have already been freed.
OP should move someFunc1 and someFunc2 to a worker thread.
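As a hedged sketch of that suggestion (the class and member names here are invented, and the VP work from the question would go inside run()): subclass QThread, run the loop there, and have it poll an atomic stop flag that closeEvent() sets.

#include <QThread>
#include <atomic>

class WorkerThread : public QThread
{
protected:
    void run() override
    {
        // Construct VP and run the time-consuming loop here,
        // checking the stop flag once per iteration.
        for (int i = 0; i < 1000000 && !m_stopRequested.load(); ++i) {
            // one iteration of the long-running work
        }
    }
public:
    void requestStop() { m_stopRequested.store(true); }
private:
    std::atomic<bool> m_stopRequested{false};
};

// In MainWindow, keep a WorkerThread* member, call worker->start() instead of
// someFunc1(), and in closeEvent() call worker->requestStop() followed by
// worker->wait().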
Theoretically, you might be able to do this using setjmp. Something along these lines:
#include "setjmp.h"
jmp_buf doNotAttempt;
jmp_buf badPractice;
int MainWindow::someFunc1()
{
VP vp1;
for (...) {
// do stuff
if (setjmp(doNotAttempt)) { /*free resources, then: */ longjmp(badPractice,1); }
}
return 0;
}
// [...]
void MainWindow::closeEvent(QCloseEvent *)
{
if (!setjmp(badPractice))
longjmp(doNotAttempt,1);
else
// do the same for your other loop
}
In practice, do not do this - it's a terrible idea for all kinds of reasons. As other folks have said, just declare vp1 and vp2 as member variables.
I am creating a C++ library for both Linux (with PThreads) and Windows (with their built-in WinThreads) which can be attached to any program, and needs to have a function called when the thread is exiting, similar to how atexit works for processes.
I know of pthread_cleanup_push and pthread_cleanup_pop for pthreads, but these do not work for me since they are macros that add another lexical scope, whereas I want to declare this function the first time my library is called into, and then allow the program itself to run its own code however it needs to. I haven't found anything similar in Windows whatsoever.
Note that this doesn't mean I want an outside thread to be alerted when the thread stops, or even that I can change the way the thread will be exited, since that is controlled by the program itself, my library is just attached, along for the ride.
So the question is: What is the best way, in this instance, for me to have a function I've written called when the thread closes, in either Windows or Linux, when I have no control over how the thread is created or destroyed?
For example, in the main program:
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

void* threadFunc(void* arg)
{
printf("Hello world!\n");
return NULL;
}
int main(int argc, char** argv)
{
int numThreads = 1;
int nVal = 0;
pthread_t ntid;
pthread_t* pthreads = (pthread_t*) calloc(numThreads, sizeof(pthread_t));
pthread_create(&ntid, NULL, threadFunc, &nVal);
pthreads[0] = ntid;
pthread_join(pthreads[0], NULL);
free(pthreads);
return 0;
}
In library:
void callMeOnExit()
{
printf("Exiting Thread!\n");
}
I would want callMeOnExit to be called when the thread reaches return NULL; in this case, as well as when the main thread reaches return 0;. Wrapping pthread_exit would work for some cases and could be a solution, but I'd like a better one if possible.
If anyone has any ideas on how I might be able to do this, that would be great!
So after a few code reviews, we were able to find a much more elegant way to do this on Linux, which matches both what Windows does with fibers (as Neeraj points out) and what I expected to find when I started looking into this issue.
The key is that pthread_key_create takes a pointer to a destructor as its second argument, and that destructor is called when any thread that has set a value for this TLS key dies. I was already using per-thread TLS elsewhere, but even a simple store into TLS would get you this feature and ensure the destructor is called.
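A minimal sketch of that mechanism (the registration helper below is invented for illustration; the library only needs to store any non-NULL value under the key so that the destructor fires when the thread exits):

#include <pthread.h>
#include <stdio.h>

static pthread_key_t g_exitKey;
static pthread_once_t g_exitKeyOnce = PTHREAD_ONCE_INIT;

// Runs automatically in every exiting thread that stored a non-NULL value for the key.
static void callMeOnExit(void *tlsValue)
{
    printf("Exiting Thread!\n");
}

static void makeKey()
{
    pthread_key_create(&g_exitKey, callMeOnExit);
}

// Call this from any library entry point on the thread you want to watch.
void registerThreadExitHook()
{
    pthread_once(&g_exitKeyOnce, makeKey);
    pthread_setspecific(g_exitKey, (void*)1); // any non-NULL value will do
}

Note that POSIX only promises to run these destructors when a thread exits via pthread_exit or by returning from its start routine; a thread that ends because the whole process returns from main() will not get the callback.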
Change this:
pthread_create(&ntid, NULL, threadFunc, &nVal);
into:
struct exitInformData
{
    void* (*CB)(void*);
    void* data;
    exitInformData(void* (*cp)(void*), void* dp) : CB(cp), data(dp) {}
};
pthread_create(&ntid, NULL, exitInform, new exitInformData(&threadFunc, &nVal));
Then Add:
void* exitInform(void* data)
{
exitInformData* ei = reinterpret_cast<exitInformData*>(data);
void* r = (ei->CB)(ei->data); // Calls the function you want.
callMeOnExit(); // Calls the exit notification.
delete ei;
return r;
}
For Windows, you could try FLS callbacks. The FLS system can be used to allocate per-thread storage (ignore the 'fiber' part; each thread contains one fiber). You get the callback so you can free that storage, but you can do other things in the callback as well.
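A hedged sketch of what that might look like (FlsAlloc requires Windows Vista or later, and the registration helper is invented for illustration):

#include <windows.h>
#include <stdio.h>

static DWORD g_flsIndex = FLS_OUT_OF_INDEXES;

// Called by the system when a thread that stored a non-NULL value in the slot
// exits (and also when the slot itself is freed).
static VOID WINAPI onThreadExit(PVOID flsData)
{
    printf("Exiting Thread!\n");
}

// Call this from any library entry point on the thread you want to watch.
// (A real library would guard the one-time allocation against races.)
void registerThreadExitHook()
{
    if (g_flsIndex == FLS_OUT_OF_INDEXES)
        g_flsIndex = FlsAlloc(onThreadExit);
    FlsSetValue(g_flsIndex, (PVOID)1); // any non-NULL value will do
}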
I found out that this has already been asked, although the solution given then may not be the same as what you want...
Another idea might be to simply extend from the pthread_t class/struct, and override the pthread_exit call to call another function as you want it to, then call the superclass pthread_exit
Things seem to be working but I'm unsure if this is the best way to go about it.
Basically I have an object which does asynchronous retrieval of data. This object has a vector of pointers which are allocated and de-allocated on the main thread. Using boost::function, a process-results callback is bound to one of the pointers in this vector. When it fires, it will be running on some arbitrary thread and will modify the data behind that pointer.
Now I have critical sections around the parts that push into the vector and erase from it, in case the async retrieval object receives more requests, but I'm wondering if I also need some kind of guard in the callback that modifies the pointer data.
Hopefully this slimmed down pseudo code makes things more clear:
class CAsyncRetriever
{
// typedefs of boost functions
class DataObject
{
// methods and members
};
public:
// Start single asynch retrieve with completion callback
void Start(SomeArgs)
{
SetupRetrieve(SomeArgs);
LaunchRetrieves();
}
protected:
void SetupRetrieve(SomeArgs)
{
// ...
{ // scope for data lock
boost::lock_guard<boost::mutex> lock(m_dataMutex);
m_inProgress.push_back(SmartPtr<DataObject>(new DataObject));
m_callback = boost::bind(&CAsyncRetriever::ProcessResults, this, _1, m_inProgress.back());
}
// ...
}
void ProcessResults(DataObject* data)
{
// CALLED ON ANOTHER THREAD ... IS THIS SAFE?
data->m_SomeMember.SomeMethod();
data->m_SomeOtherMember = SomeStuff;
}
void Cleanup()
{
// ...
{ // scope for data lock
boost::lock_guard<boost::mutex> lock(m_dataMutex);
while(!m_inProgress.empty() && m_inProgress.front()->IsComplete())
m_inProgress.erase(m_inProgress.begin());
}
// ...
}
private:
std::vector<SmartPtr<DataObject>> m_inProgress;
boost::mutex m_dataMutex;
// other members
};
Edit: This is the actual code for the ProcessResults callback (plus comments for your benefit)
void ProcessResults(CRetrieveResults* pRetrieveResults, CRetData* data)
{
// pRetrieveResults is delayed binding that server passes in when invoking callback in thread pool
// data is raw pointer to ref counted object in vector of main thread (the DataObject* in question)
// if there was an error set the code on the atomic int in object
data->m_nErrorCode.Store_Release(pRetrieveResults->GetErrorCode());
// generic iterator of results bindings for generic storage class item
TPackedDataIterator<GenItem::CBind> dataItr(&pRetrieveResults->m_DataIter);
// namespace function which will iterate results and initialize generic storage
GenericStorage::InitializeItems<GenItem>(&data->m_items, dataItr, pRetrieveResults->m_nTotalResultsFound); // potentially time consuming depending on the number of results and the number of columns bound in the storage class definition (about 8 seconds for a million equipment items in a release build)
// atomic uint32_t that is incremented when kicking off async retrieve
m_nStarted.Decrement(); // this one is done processing
// boost function completion callback bound to interface that requested results
data->m_complete(data->m_items);
}
As it stands, it appears that the Cleanup code can destroy an object for which a callback to ProcessResults is in flight. That's going to cause problems when you deref the pointer in the callback.
My suggestion would be that you extend the semantics of your m_dataMutex to encompass the callback. Things get more complex if the callback is long-running, or if it can happen inline within SetupRetrieve (sometimes this does happen, though here you state the callback is on a different thread, in which case you are OK). Currently m_dataMutex is a bit confused about whether it controls access to the vector, to its contents, or to both. With its scope clarified, ProcessResults could then be enhanced to verify the validity of the payload while holding the lock.
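A rough sketch of that suggestion, reusing the names from the pseudocode above (and assuming SmartPtr exposes a get() that returns the raw pointer): hold m_dataMutex inside the callback and confirm the object is still tracked before touching it.

void ProcessResults(DataObject* data)
{
    boost::lock_guard<boost::mutex> lock(m_dataMutex);

    // Only proceed if Cleanup() has not already dropped this object.
    bool stillTracked = false;
    for (std::size_t i = 0; i < m_inProgress.size(); ++i)
    {
        if (m_inProgress[i].get() == data)
        {
            stillTracked = true;
            break;
        }
    }
    if (!stillTracked)
        return;

    data->m_SomeMember.SomeMethod();
    data->m_SomeOtherMember = SomeStuff;
}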
No, it isn't safe.
ProcessResults operates on the data structure passed to it through DataObject. It indicates that you have shared state between different threads, and if both threads operate on the data structure concurrently you might have some trouble coming your way.
Updating a pointer should be an atomic operation, but you can use InterlockedExchangePointer (in Windows) to be sure. Not sure what the Linux equivalent would be.
The only consideration then would be if one thread is using an obsolete pointer. Does the other thread delete the object pointed to by the original pointer? If so, you have a definite problem.
I have an object that is called back from two different threads, and after it has been called by both it destroys itself with "delete this".
How do I implement this in a thread-safe way? Thread-safe here means that the object must destroy itself exactly once, after the second callback.
I created some example code:
class IThreadCallBack
{
public:
    virtual void CallBack(int) = 0;
};
class M: public IThreadCallBack
{
private:
bool t1_finished, t2_finished;
public:
M(): t1_finished(false), t2_finished(false)
{
startMyThread(this, 1);
startMyThread(this, 2);
}
void CallBack(int id)
{
if (id == 1)
{
t1_finished = true;
}
else
{
t2_finished = true;
}
if (t1_finished && t2_finished)
{
delete this;
}
}
};
int main(int argc, char **argv) {
M* MObj = new M();
while(true);
}
Obviously I can't use a mutex that is a member of the object and hold it around the delete, because the delete would also destroy the mutex. On the other hand, if I set a "toBeDeleted" flag inside the mutex-protected area where the finished flags are set, I am unsure whether there are situations where the object isn't deleted at all.
Note that the thread-implementation makes sure that the callback method is called exactly one time per thread in any case.
Edit / Update:
What if I change Callback(..) to:
void CallBack(int id)
{
mMutex.Obtain()
if (id == 1)
{
t1_finished = true;
}
else
{
t2_finished = true;
}
bool both_finished = (t1_finished && t2_finished);
mMutex.Release();
if (both_finished)
{
delete this;
}
}
Can this be considered safe (with mMutex being a member of the M class)?
I think it is, as long as I don't access any member after releasing the mutex?!
Use Boost's Smart Pointer. It handles this automatically; your object won't have to delete itself, and it is thread safe.
Edit:
From the code you've posted above, I can't really say; I need more info. But you could do it like this: each thread has a shared_ptr object, and when the callback is called, you call shared_ptr::reset(). The last reset will delete M. Each shared_ptr could be stored in thread-local storage in each thread. So in essence, each thread is responsible for its own shared_ptr.
Instead of using two separate flags, you could consider setting a counter to the number of threads that you're waiting on and then using interlocked decrement.
Then you can be 100% sure that when the thread counter reaches 0, you're done and should clean up.
For more info on interlocked decrement on Windows, on Linux, and on Mac.
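A minimal sketch of that counter idea using C++11 atomics (std::atomic is the portable stand-in for the interlocked operations mentioned above), reworking the M class from the question:

#include <atomic>

class M : public IThreadCallBack
{
public:
    M() : m_pending(2)
    {
        startMyThread(this, 1);
        startMyThread(this, 2);
    }
    void CallBack(int id)
    {
        // fetch_sub returns the previous value, so exactly one thread sees 1
        // and performs the delete, no matter how the callbacks interleave.
        if (m_pending.fetch_sub(1) == 1)
            delete this;
    }
private:
    std::atomic<int> m_pending;
};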
I once implemented something like this that avoided the ickiness and confusion of delete this entirely, by operating in the following way:
Start a thread that is responsible for deleting these sorts of shared objects, which waits on a condition
When the shared object is no longer being used, instead of deleting itself, have it insert itself into a thread-safe queue and signal the condition that the deleter thread is waiting on
When the deleter thread wakes up, it deletes everything in the queue
If your program has an event loop, you can avoid the creation of a separate thread for this by creating an event type that means "delete unused shared objects" and have some persistent object respond to this event in the same way that the deleter thread would in the above example.
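A rough sketch of that deleter-thread idea in C++11 terms (the names below are invented for illustration, and IThreadCallBack would need a virtual destructor for the delete through a base pointer to be correct):

#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>

class DeferredDeleter
{
public:
    DeferredDeleter() : m_stop(false), m_worker(&DeferredDeleter::run, this) {}
    ~DeferredDeleter()
    {
        { std::lock_guard<std::mutex> lock(m_mutex); m_stop = true; }
        m_wakeup.notify_one();
        m_worker.join();
    }
    // The shared object calls this instead of "delete this".
    void scheduleDelete(IThreadCallBack* obj)
    {
        { std::lock_guard<std::mutex> lock(m_mutex); m_pending.push_back(obj); }
        m_wakeup.notify_one();
    }
private:
    void run()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        while (!m_stop || !m_pending.empty())
        {
            m_wakeup.wait(lock, [this] { return m_stop || !m_pending.empty(); });
            while (!m_pending.empty())
            {
                IThreadCallBack* obj = m_pending.front();
                m_pending.pop_front();
                lock.unlock();
                delete obj; // nothing else references it any more
                lock.lock();
            }
        }
    }
    std::mutex m_mutex;
    std::condition_variable m_wakeup;
    std::deque<IThreadCallBack*> m_pending;
    bool m_stop;
    std::thread m_worker;
};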
I can't imagine that this is possible, especially within the class itself. The problem is twofold:
1) There's no way to notify the outside world not to call the object so the outside world has to be responsible for setting the pointer to 0 after calling "CallBack" iff the pointer was deleted.
2) Once two threads enter this function you are, and forgive my french, absolutely fucked. Calling a function on a deleted object is UB, just imagine what deleting an object while someone is in it results in.
I've never seen "delete this" as anything but an abomination. Doesn't mean it isn't sometimes, on VERY rare conditions, necessary. Problem is that people do it way too much and don't think about the consequences of such a design.
I don't think "to be deleted" is going to work well. It might work for two threads, but what about three? You can't protect the part of code that calls delete because you're deleting the protection (as you state) and because of the UB you'll inevitably cause. So the first goes through, sets the flag and aborts....which of the rest is going to call delete on the way out?
The more robust implementation would be to implement reference counting. For each thread you start, increase a counter; in each callback, decrease the counter, and if the counter has reached zero, delete the object. You can lock the counter access, or you could use the Interlocked class to protect it, though in that case you need to be careful about a potential race between the first thread finishing and the second starting.
Update: And of course, I completely ignored the fact that this is C++. :-) You should use InterlockedDecrement to update the counter instead of the C# Interlocked class.