Currently we are using asynchronous values very heavily.
Assume that I have a function which does something like this:
int do_something(const boost::posix_time::time_duration& sleep_time)
{
BOOST_MESSAGE("Sleeping a bit");
boost::this_thread::sleep(sleep_time);
BOOST_MESSAGE("Finished taking a nap");
return 42;
}
At some point in the code we create a task which creates a future to such an int value, which will be set by a packaged_task - like this (working_queue is a boost::asio::io_service in this example):
boost::unique_future<int> createAsynchronousValue(const boost::posix_time::seconds& sleep)
{
boost::shared_ptr< boost::packaged_task<int> > task(
new boost::packaged_task<int>(boost::bind(do_something, sleep)));
boost::unique_future<int> ret = task->get_future();
// Trigger execution
working_queue.post(boost::bind(&boost::packaged_task<int>::operator (), task));
return boost::move(ret);
}
At another point in the code I want to wrap this function to return some higher level object which should also be a future. I need a conversion function which takes the first value and transforms it to another value. (In our actual code we have some layering and do asynchronous RPC which returns futures to responses; these responses should be converted into futures to real objects, PODs, or even a void future, to be able to wait on it or catch exceptions.) So this is the conversion function in this example:
float converter(boost::shared_future<int> value)
{
BOOST_MESSAGE("Converting value " << value.get());
return 1.0f * value.get();
}
Then I thought of creating a lazy future as described in the Boost docs to do this conversion only if wanted:
void invoke_lazy_task(boost::packaged_task<float>& task)
{
try
{
task();
}
catch(boost::task_already_started&)
{}
}
And then I have a function (might be a higher level API) to create a wrapped future:
boost::unique_future<float> createWrappedFuture(const boost::posix_time::seconds& sleep)
{
boost::shared_future<int> int_future(createAsynchronousValue(sleep));
BOOST_MESSAGE("Creating converter task");
boost::packaged_task<float> wrapper(boost::bind(converter, int_future));
BOOST_MESSAGE("Setting wait callback");
wrapper.set_wait_callback(invoke_lazy_task);
BOOST_MESSAGE("Creating future to converter task");
boost::unique_future<float> future = wrapper.get_future();
BOOST_MESSAGE("Returning the future");
return boost::move(future);
}
At the end I want to be able to use it like this:
{
boost::unique_future<float> future = createWrappedFuture(boost::posix_time::seconds(1));
BOOST_MESSAGE("Waiting for the future");
future.wait();
BOOST_CHECK_EQUAL(future.get(), 42.0f);
}
But here I end up getting an exception about a broken promise. The reason seems pretty clear to me: the packaged_task which does the conversion goes out of scope.
So my question is: How do I deal with such situations? How can I prevent the task from being destroyed? Is there a pattern for this?
You need to manage the lifetime of the task object properly.
The most correct way is to return boost::packaged_task<float> instead of boost::unique_future<float> from createWrappedFuture(). The caller will be responsible for getting the future object and for prolonging the task's lifetime until the future value is ready.
Or you can place the task object into some 'pending' queue (global or a class member), the same way you did in createAsynchronousValue. But in this case you will need to explicitly manage the task's lifetime and remove it from the queue after completion, so I don't think this solution has advantages over returning the task object itself.
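A minimal sketch of the first option, reusing the question's helpers (createWrappedTask here is just a renamed variant of the question's createWrappedFuture; whether boost::move is needed on the task depends on your Boost version, so treat this as untested):
boost::packaged_task<float> createWrappedTask(const boost::posix_time::seconds& sleep)
{
    boost::shared_future<int> int_future(createAsynchronousValue(sleep));
    boost::packaged_task<float> wrapper(boost::bind(converter, int_future));
    wrapper.set_wait_callback(invoke_lazy_task);
    return boost::move(wrapper); // packaged_task is move-only
}

// The caller keeps the task alive until it is done with the value:
boost::packaged_task<float> task = createWrappedTask(boost::posix_time::seconds(1));
boost::unique_future<float> future = task.get_future();
future.wait();
BOOST_CHECK_EQUAL(future.get(), 42.0f);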
As a novice C++ programmer who is very new to the concept of coroutines, I am trying to study and use the feature. Although there is an explanation of coroutines here: What is a coroutine?
I am not yet sure when and how to use coroutines. Several example use cases were provided, but those use cases had alternative solutions that could be implemented with pre-C++20 features (e.g. lazy computation of an infinite sequence can be done by a class with a private internal state variable).
Therefore I am looking for use cases where coroutines are particularly useful.
The word "coroutine" in this context is somewhat overloaded.
The general programming concept called a "coroutine" is what is described in the question you're referring to. C++20 added a language feature called "coroutines". While C++20's coroutines are somewhat similar to the programming concept, they're not all that similar.
At the ground level, both concepts are built on the ability of a function (or call stack of functions) to halt its execution and transfer control of execution to someone else. This is done with the expectation that control will eventually be given back to the function which has surrendered execution for the time being.
Where C++ coroutines diverge from the general concept is in their limitations and designed application.
co_await <expr> as a language construct does the following (in very broad strokes). It asks the expression <expr> if it has a result value to provide at the present time. If it does have a result, then the expression extracts the value and execution in the current function continues as normal.
If the expression cannot be resolved at the present time (perhaps because <expr> is waiting on an external resource or asynchronous process or something), then the current function suspends its execution and returns control to the function that called it. The coroutine also attaches itself to the <expr> object such that, once <expr> has the value, it should resume the coroutine's execution with said value. This resumption may or may not happen on the current thread.
So we see the pattern of C++20 coroutines. Control on the current thread returns to the caller, but resumption of the coroutine is determined by the nature of the value being co_awaited on. The caller gets an object that represents the future value the coroutine will produce but has not yet. The caller can wait on it to be ready or go do something else. It may also be able to itself co_await on the future value, creating a chain of coroutines to be resumed once a value is computed.
We also see the primary limitation: suspension applies only to the immediate function. You cannot suspend an entire stack of function calls unless each one of them individually does their own co_awaits.
C++ coroutines are a complex dance between 3 parties: the expression being awaited on, the code doing the awaiting, and the caller of the coroutine. Using co_yield essentially removes one of these three parties. Namely, the yielded expression is not expected to be involved. It's just a value which is going to be dumped to the caller. So yielding coroutines only involve the coroutine function and the caller. Yielding C++ coroutines are a bit closer to the conceptual idea of "coroutines".
Using a yielding coroutine to serve a number of values to the caller is generally called a "generator". How "simple" this makes your code depends on your generator framework (ie: the coroutine return type and its associated coroutine machinery). But good generator frameworks can expose range interfaces to the generation, allowing you to apply C++20 ranges to them and do all sorts of interesting compositions.
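For example, with C++23's std::generator (or any equivalent hand-written generator type - this sketch assumes a C++23 compiler shipping the <generator> header), a yielding coroutine plugs straight into range-for and the ranges library:
#include <generator> // C++23
#include <iostream>

std::generator<int> counter(int from, int to)
{
    for (int i = from; i <= to; ++i)
        co_yield i; // suspend, hand i to the caller, resume on the next iteration
}

int main()
{
    for (int v : counter(1, 5)) // the range interface drives suspend/resume
        std::cout << v << ' ';
}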
Coroutines make asynchronous programming more readable.
Without coroutines, we use callbacks in asynchronous programming:
#include <functional>

void callback(int data1, int data2)
{
    // do something with data1, data2 after the async op
    // ...
}

void async_op(std::function<void()> callback)
{
    // start some async operation; invoke callback when it completes
}

int main()
{
    // do something
    int data1 = 0;
    int data2 = 0;
    async_op(std::bind(callback, data1, data2));
    return 0;
}
If there are a lot of callbacks, the code becomes very hard to read.
With coroutines the code becomes:
#include <coroutine>
#include <functional>
struct promise;
struct coroutine : std::coroutine_handle<promise>
{
using promise_type = struct promise;
};
struct promise
{
coroutine get_return_object() { return {coroutine::from_promise(*this)}; }
std::suspend_never initial_suspend() noexcept { return {}; } // run immediately up to the first co_await
std::suspend_never final_suspend() noexcept { return {}; }
void return_void() {}
void unhandled_exception() {}
};
struct awaitable
{
    bool await_ready() { return false; } // always suspend at co_await
    bool await_suspend(std::coroutine_handle<promise> h)
    {
        // Start the async operation. A real implementation would store h,
        // return true, and have the operation's completion callback call
        // h.resume(). Here the operation finishes synchronously, so
        // returning false resumes the coroutine immediately.
        func();
        return false;
    }
    void await_resume() { }
    std::function<void()> func;
};
void async_op()
{
// do some async operation
}
coroutine callasync()
{
    // do something
    int data1;
    int data2;
    co_await awaitable{async_op};
    // do something with data1, data2 after the async op
    // ...
}
int main()
{
callasync();
return 0;
}
it seems to me that those cases can be achieved in a simpler way (e.g. lazy computation of an infinite sequence can be done by a class with a private internal state variable)
Say you're writing a function that should interact with a remote server, creating a TCP connection, logging in with some multi-stage challenge/response protocol, making queries and getting replies (often in dribs and drabs over TCP), eventually disconnecting.... If you were writing a dedicated function to synchronously do that - as you might if you had a dedicated thread for this - then your code could very naturally reflect the stages of connection, request and response processing and disconnecting, just by the order of statements in your function and the use of flow control (for, while, switch, if). The data needed at various points would be localised in a scope reflecting its use, so it's easier for the programmer to know what's relevant at each point. This is easy to write, maintain and understand.
If, however, you wanted the interactions with the remote host to be non-blocking and to do other work in the thread while they were happening, you could make it event driven, using a class with private internal state variable[s] to track the state of your connection, as you suggest. But your class would need not only the same variables the synchronous-function version would need (e.g. a buffer for assembling incoming messages), but also variables to track where in the overall connection/processing steps you left off (e.g. enum state { tcp_connection_pending, awaiting_challenge, awaiting_login_confirmation, awaiting_reply_to_message_x, awaiting_reply_to_message_y }, counters, an output buffer), and you'd need more complex code to jump back in to the right processing step. You no longer have localisation of data with its use in specific statement blocks - instead you have a flat hodge-podge of class data members and additional mental overhead in understanding which parts of the code care about them, when they're valid or not, etc. It's all spaghetti. (The State/Strategy design pattern can help structure this better, but sometimes with runtime costs for virtual dispatch, dynamic allocation, etc.)
Co-routines provide a best-of-both-worlds solution: you can think of them as providing an additional stack for the call to what looks very much like the concise and easy/fast-to-write/maintain/understand synchronous-processing function initially explained above, but with the ability to suspend and resume instead of blocking, so the same thread can progress the connection handling as well as do other work (it could even invoke the coroutine thousands of times to handle thousands of remote connections, switching efficiently between them to keep work happening as network I/O happens).
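To make that concrete, here is a sketch of what such a session could look like as a coroutine. task and the async_* awaitables (along with answer, make_query, process) are hypothetical wrappers around non-blocking I/O, not a real library API:
task<void> session(std::string host)
{
    auto conn = co_await async_connect(host, 443);  // suspends instead of blocking
    auto challenge = co_await async_recv(conn);     // multi-stage login reads like
    co_await async_send(conn, answer(challenge));   // straight-line code
    co_await async_send(conn, make_query());
    auto reply = co_await async_recv(conn);         // locals hold all the state a
    process(reply);                                 // hand-written class would need
    co_await async_close(conn);
}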
Harkening back to your "lazy computation of infinite sequence" - in one sense, a coroutine may be overkill for this, as there may not be multiple processing stages/states, or subsets of data members that are relevant therein. There are some benefits to consistency though - if providing e.g. pipelines of coroutines.
Just as lambdas in C++ save you from defining classes and functions when you want to capture context, coroutines save you from defining a class and a relatively complex function, or set of functions, when you want to be able to suspend and resume the execution of a function.
But unlike lambdas, to use and define coroutines you need a support library, and C++20 is missing that aspect in the standard library. As a consequence, most if not all explanations of C++ coroutines target the low-level interface and explain how to build the support library as much as, if not more than, how to use it, giving the impression that usage is more complex than it is. You get a "how to implement std::vector" kind of description when you want a "how to use std::vector".
To take the example from cppreference.com, coroutines allow you to write
Generator<uint64_t>
fibonacci_sequence(unsigned n)
{
if (n==0)
co_return;
if (n>94)
throw std::runtime_error("Too big Fibonacci sequence. Elements would overflow.");
co_yield 0;
if (n==1)
co_return;
co_yield 1;
if (n==2)
co_return;
uint64_t a=0;
uint64_t b=1;
for (unsigned i = 2; i < n;i++)
{
uint64_t s=a+b;
co_yield s;
a=b;
b=s;
}
}
instead of the following (I didn't run this through a compiler, so there may be errors in it):
class FibonacciSequence {
public:
FibonacciSequence(unsigned n);
bool done() const;
void next();
uint64_t value() const;
private:
unsigned n;
unsigned state;
unsigned i;
uint64_t mValue;
uint64_t a;
uint64_t b;
uint64_t s;
};
FibonacciSequence::FibonacciSequence(unsigned pN)
: n(pN), state(1)
{}
bool FibonacciSequence::done() const
{
return state == 0;
}
uint64_t FibonacciSequence::value() const
{
return mValue;
}
void FibonacciSequence::next()
{
for (;;) {
switch (state) {
case 0:
return;
case 1:
if (n==0) {
state = 0;
return;
}
if (n>94)
throw std::runtime_error("Too big Fibonacci sequence. Elements would overflow.");
mValue = 0;
state = 2;
return;
case 2:
if (n==1) {
state = 0;
return;
}
mValue = 1;
state = 3;
return;
case 3:
if (n==2) {
state = 0;
return;
}
a=0;
b=1;
i=2;
state = 4;
break;
case 4:
if (i < n) {
s=a+b;
mValue = s;
state = 5;
return;
} else {
state = 6;
}
break;
case 5:
a=b;
b=s;
++i;
state = 4;
break;
case 6:
state = 0;
return;
}
}
}
FibonacciSequence fibonacci_sequence(unsigned n) {
return FibonacciSequence(n);
}
Obviously something simpler could be used, but I wanted to show how the mapping could be done automatically, without any kind of optimization. And I've side-stepped the additional complexity of allocation and deallocation.
That transformation is useful for generators like here. It is more generally useful when you want a kind of collaborative concurrency, with or without parallelism. Sadly, for such things you need even more library support (including a scheduler to choose the coroutine which will be executed next in a given context), and I've not seen relatively simple examples of that showing the underlying concepts while avoiding drowning in implementation details.
BACKGROUND
After being convinced that C++ stackless coroutines are pretty awesome, I have been implementing coroutines for my codebase and realised an oddity in final_suspend.
CONTEXT
Let’s say you have the following final_suspend function:
final_awaitable final_suspend() noexcept
{
return {};
}
And, final_awaitable was implemented as follows:
struct final_awaitable
{
bool await_ready() const noexcept
{
return false;
}
default_handle_t await_suspend( promise_handle_t h ) const noexcept
{
return h.promise().continuation();
}
void await_resume() const noexcept {}
};
If the continuation here was retrieved atomically from a task queue, and the task queue is potentially empty (which could occur at any time between await_ready and await_suspend), then await_suspend must be able to return a blank continuation.
It is my understanding that when await_suspend returns a handle, the returned handle is immediately resumed (5.1 in the N4775 draft). So, if no continuation was available here, the application crashes, as resume is called on an invalid coroutine handle received from await_suspend.
The following is the execution order:
final_suspend Constructs final_awaitable.
final_awaitable::await_ready Returns false, triggering await_suspend.
final_awaitable::await_suspend Returns a continuation (or empty continuation).
continuation::resume This could be null if retrieved from an empty work queue.
No check appears to be specified for a valid handle (as it is if await_suspend returns bool).
QUESTION
How are you supposed to add a worker queue to await_suspend without a lock in this case? I'm looking for a scalable solution.
Why doesn't the underlying coroutine implementation check for a valid handle?
A contrived example causing the crash is here.
SOLUTION IDEAS
Using a dummy task that is an infinite loop of co_yield. This sort of wastes cycles and I would prefer not to have to do it; also I would need to create separate handles to the dummy task for every thread of execution, and that just seems silly.
Creating a specialisation of std::coroutine_handle where resume does nothing, and returning an instance of that handle. I'd prefer not to specialise the standard library. This also doesn't work because coroutine_handle<> doesn't have done() and resume() as virtual.
EDIT 1 16/03/2020: Call continuation() to atomically retrieve a continuation and store the result in the final_awaitable structure; await_ready would return true if there wasn't a continuation available. If there was a continuation available, await_ready would return false, await_suspend would then be called, and the continuation returned (immediately resuming it).
This doesn't work because the value returned by a task is stored in the coroutine frame and if the value is still needed then the coroutine frame must not be destroyed. In this case it is destroyed after await_resume is called on the final_awaitable.
This is only an issue if the task is the last in a chain of continuations.
EDIT 2 - 20/03/2020: Ignore the possibility of returning a usable coroutine handle from await_suspend. Only resume continuations from the top-level coroutine. This doesn't appear as efficient.
01/04/2020
I still haven't found a solution that doesn't have substantial disadvantages. I suppose the reason I'm caught up on this is because await_suspend appears to be designed to solve this exact problem (being able to return a coroutine_handle). I just cannot figure out the pattern that was intended.
You can use std::noop_coroutine as a blank continuation.
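That is, await_suspend can return std::coroutine_handle<> and hand back std::noop_coroutine() when no continuation is available; resuming a no-op handle is well-defined and does nothing, so there is no invalid handle to crash on. A sketch, keeping the question's promise_handle_t naming:
std::coroutine_handle<> await_suspend( promise_handle_t h ) const noexcept
{
    if (auto continuation = h.promise().continuation()) // atomically popped
        return continuation;          // resumed via symmetric transfer
    return std::noop_coroutine();     // safe blank continuation
}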
What about the following? (Just a large comment, in fact.)
struct final_awaitable
{
bool await_ready() const noexcept
{
return false;
}
bool await_suspend( promise_handle_t h ) const noexcept
{
auto continuation = h.promise().atomically_pop_a_continuation();
if (continuation)
continuation.handle().resume();
return true; // or whatever is meaningful for your case.
}
void await_resume() const noexcept {}
};
I'm just getting into concurrent programming. Most probably my issue is very common, but since I can't find a good name for it, I can't google it.
I have a C++ UWP application where I try to apply the MVVM pattern, but I guess that the pattern, or even being UWP, is not relevant.
First, I have a service interface that exposes an operation:
struct IService
{
virtual task<int> Operation() = 0;
};
Of course, I provide a concrete implementation, but it is not relevant for this discussion. The operation is potentially long-running: it makes an HTTP request.
Then I have a class that uses the service (again, irrelevant details omitted):
class ViewModel
{
unique_ptr<IService> service;
public:
task<void> Refresh();
};
I use coroutines:
task<void> ViewModel::Refresh()
{
auto result = co_await service->Operation();
// use result to update UI
}
The Refresh function is invoked on timer every minute, or in response to a user request. What I want is: if a Refresh operation is already in progress when a new one is started or requested, then abandon the second one and just wait for the first one to finish (or time out). In other words, I don't want to queue all the calls to Refresh - if a call is already in progress, I prefer to skip a call until the next timer tick.
My attempt (probably very naive) was:
mutex refresh;
task<void> ViewModel::Refresh()
{
unique_lock<mutex> lock(refresh, try_to_lock);
if (!lock)
{
// lock.release(); commented out as harmless but useless => irrelevant
co_return;
}
auto result = co_await service->Operation();
// use result to update UI
}
Edit after the original post: I commented out the line in the code snippet above, as it makes no difference. The issue is still the same.
But of course an assertion fails: unlock of unowned mutex. I guess that the problem is the unlock of mutex by unique_lock destructor, which happens in the continuation of the coroutine and on a different thread (other than the one it was originally locked on).
Using Visual C++ 2017.
Use std::atomic_bool:
std::atomic_bool isRunning = false;
if (isRunning.exchange(true, std::memory_order_acq_rel) == false){
try{
auto result = co_await service->Operation(); // not Refresh() itself, which would recurse
isRunning.store(false, std::memory_order_release);
//use result
}
catch(...){
isRunning.store(false, std::memory_order_release);
throw;
}
}
Two possible improvements: wrap isRunning.store in a RAII class, and use std::shared_ptr<std::atomic_bool> if the lifetime of the atomic_bool is scoped.
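A sketch of the RAII improvement, assuming isRunning is a std::atomic_bool member of ViewModel. The guard lives in the coroutine frame, so it runs when the coroutine finishes even if the continuation resumes on another thread - which is exactly what broke the unique_lock approach:
struct RunningGuard
{
    std::atomic_bool& flag;
    ~RunningGuard() { flag.store(false, std::memory_order_release); }
};

task<void> ViewModel::Refresh()
{
    if (isRunning.exchange(true, std::memory_order_acq_rel))
        co_return; // a refresh is already in flight
    RunningGuard guard{isRunning};
    auto result = co_await service->Operation();
    // use result to update UI; guard resets the flag even on exceptions
}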
The story:
I make use of the QtConcurrent API for every "long" operation in my application.
It works pretty well, but I face some problems with QObject creation.
Consider this piece of code, which uses a thread to create a "Foo" object:
QFuture<Foo*> future = QtConcurrent::run([=]()
{
Data* data = /*long operation to acquire the data*/
Foo* result = new Foo(data);
return result;
});
It works well, but if the "Foo" class is derived from QObject, the "result" instance belongs to the QThread that created the object.
So to use signals/slots properly with the "result" instance, one should do something like this:
QFuture<Foo*> future = QtConcurrent::run([=]()
{
Data* data = /*long operation to acquire the data*/
Foo* result = new Foo(data);
// Move "result" to the main application thread
result->moveToThread(qApp->thread());
return result;
});
Now everything works as expected, and I think this is the normal behaviour and the nominal solution.
The problem:
I have a lot of this kind of code, which sometimes creates objects, which can themselves create objects. Most of them are created properly, with a "moveToThread" call.
But sometimes I miss one "moveToThread" call.
And then a lot of things look like they don't work (because that object's slots are "broken"), without any Qt warning.
I sometimes spend a lot of time figuring out why something doesn't work, before understanding that it's only because the slots are no longer called on a particular object instance.
The question:
Is there any way to help me prevent/detect/debug this kind of situation?
For example :
having a warning logged every time a QThread is deleted while objects that belong to it are still alive?
having a warning logged every time a signal is emitted to an object whose QThread has been deleted?
having a warning logged every time a signal is emitted to an object (in another thread) and not processed before a timeout?
It is possible to track an object's movement among threads. Just before an object is moved to the new thread, it is sent a ThreadChange event. You can filter that event and have your code run to take a note of when an object leaves a thread. But it's too early at that point to know of whether the object goes anywhere. To detect that, you need to post a metacall (see this question) to the object's queue to be executed as soon as the object's event processing resumes in the new thread. You'd also attach to QThread::finished to get a chance to look through your object list and check if any of them live on the thread that's about to die.
But all this is fairly involved: each thread will need its own tracker/filter object, as event filters must live in the object's thread. We're probably talking of more than 200 lines of code to do it right, handling all corner cases.
Instead, you can leverage RAII and hold your objects using handles that manage thread affinity as a resource (because it is one!):
// https://github.com/KubaO/stackoverflown/tree/master/questions/thread-track-38611886
#include <QtConcurrent>
template <typename T>
class MainResult {
Q_DISABLE_COPY(MainResult)
T * m_obj;
public:
template<typename... Args>
MainResult(Args&&... args) : m_obj{ new T(std::forward<Args>(args)...) } {}
MainResult(T * obj) : m_obj{obj} {}
T* operator->() const { return m_obj; }
operator T*() const { return m_obj; }
T* operator()() const { return m_obj; }
~MainResult() { m_obj->moveToThread(qApp->thread()); }
};
struct Foo : QObject { Foo(int) {} };
You can return a MainResult by value, but the return type of the functor must be explicitly given:
QFuture<Foo*> test1() {
return QtConcurrent::run([=]()->Foo*{ // explicit return type
MainResult<Foo> obj{1};
obj->setObjectName("Hello");
return obj; // return by value
});
}
Alternatively, you can return the result of calling MainResult; it's a functor itself to save a bit of typing but this might be considered a hack and perhaps you should convert operator()() to a method with a short name.
QFuture<Foo*> test2() {
return QtConcurrent::run([=](){ // deduced return type
MainResult<Foo> obj{1};
obj->setObjectName("Hello");
return obj(); // return by call
});
}
While it's preferable to construct the object along with the handle, it's also possible to pass an instance pointer to the handle's constructor:
MainResult<Foo> obj{ new Foo{1} };
I have a multithreaded application that runs using a custom thread pool class. The threads all execute the same function, with different parameters.
These parameters are given to the threadpool class the following way:
// jobParams is a struct of int, double, etc...
jobParams* params = new jobParams;
params->value1 = 2;
params->value2 = 3;
int jobId = 0;
threadPool.addJob(jobId, params);
As soon as a thread has nothing to do, it gets the next parameters and runs the job function. I decided to take care of the deletion of the parameters in the threadpool class:
ThreadPool::~ThreadPool() {
for (int i = 0; i < this->jobs.size(); ++i) {
delete this->jobs[i].params;
}
}
However, when doing so, I sometimes get a heap corruption error:
Invalid Address specified to RtlFreeHeap
The strange thing is that in one case it works perfectly, but in another program it crashes with this error. I tried deleting the pointer at other places: in the thread after the execution of the job function (I get the same heap corruption error) or at the end of the job function itself (no error in this case).
I don't understand how deleting the same pointers (I checked, the addresses are the same) from different places changes anything. Does this have anything to do with the fact that it's multithreaded?
I do have a critical section that handles access to the parameters. I don't think the problem is synchronized access. Anyway, the destructor is called only once all threads are done, and I don't delete any pointer anywhere else. Can pointers be deleted automatically?
As for my code: the list of jobs is a queue of a structure, composed of the id of a job (used to be able to get the output of a specific job later) and the parameters.
getNextJob() is called by the threads (they have a pointer to the ThreadPool) each time they finish executing their last job.
void ThreadPool::addJob(int jobId, void* params) {
jobData job; // jobData is a simple struct { int, void* }
job.ID = jobId;
job.params = params;
// insert parameters in the list
this->jobs.push(job);
}
jobData* ThreadPool::getNextJob() {
// get the data of the next job
jobData* job = NULL;
// we don't want to start a same job twice,
// so we make sure that we are only one at a time in this part
WaitForSingleObject(this->mutex, INFINITE);
if (!this->jobs.empty())
{
job = &(this->jobs.front());
this->jobs.pop();
}
// we're done with the exclusive part !
ReleaseMutex(this->mutex);
return job;
}
Let's turn this on its head: Why are you using pointers at all?
struct Params
{
    int value1, value2; // etc...
};

struct ThreadJob
{
    int jobID; // or whatever...
    Params params;
};

class ThreadPool
{
    std::list<ThreadJob> jobs;
public:
    void addJob(int job, const Params & p)
    {
        ThreadJob j{job, p};
        jobs.push_back(j);
    }
};
No new, delete or pointers... Obviously some of the implementation details may be cocked, but you get the overall picture.
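Usage would then be something like this (assuming the sketch above, with public members on Params):
ThreadPool pool;
Params p;
p.value1 = 2;
p.value2 = 3;
pool.addJob(0, p); // copied into the ThreadJob; no new/delete anywhere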
Thanks for the extra code. Now we can see a problem in getNextJob:
if (!this->jobs.empty())
{
job = &(this->jobs.front());
this->jobs.pop();
After the "pop", the memory pointed to by 'job' is undefined. Don't use a reference, copy the actual data!
Try something like this (it's still generic, because JobData is generic):
jobData ThreadPool::getNextJob() // get the data of the next job
{
jobData job;
WaitForSingleObject(this->mutex, INFINITE);
if (!this->jobs.empty())
{
job = (this->jobs.front());
this->jobs.pop();
}
// we're done with the exclusive part !
ReleaseMutex(this->mutex);
return job;
}
Also, while you're adding jobs to the queue you must ALSO lock the mutex, to prevent corruption of the container. AFAIK std::queue and the other standard containers are NOT inherently thread-safe.
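For example, addJob from the question would take the same mutex as getNextJob; a sketch using the question's own Win32 primitives:
void ThreadPool::addJob(int jobId, void* params)
{
    jobData job;
    job.ID = jobId;
    job.params = params;
    WaitForSingleObject(this->mutex, INFINITE); // same mutex as getNextJob
    this->jobs.push(job);
    ReleaseMutex(this->mutex);
}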
Using operator delete on a pointer to void results in undefined behavior according to the specification.
Chapter 5.3.5 of the draft C++ specification, paragraph 3:
In the first alternative (delete object), if the static type of the operand is different from its dynamic type, the static type shall be a base class of the operand's dynamic type and the static type shall have a virtual destructor or the behavior is undefined. In the second alternative (delete array) if the dynamic type of the object to be deleted differs from its static type, the behavior is undefined.
And the corresponding footnote:
This implies that an object cannot be deleted using a pointer of type void* because there are no objects of type void.
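So the deletion in the destructor needs to cast back to the concrete type first; a sketch against the question's code, assuming every params pointer really was allocated as a jobParams:
ThreadPool::~ThreadPool() {
    for (int i = 0; i < this->jobs.size(); ++i) {
        delete static_cast<jobParams*>(this->jobs[i].params);
    }
}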
All access to the job queue must be synchronized, i.e. performed only from 1 thread at a time by locking the job queue prior to access. Do you already have a critical section or some similar pattern to guard the shared resource? Synchronization issues often lead to weird behaviour and bugs which are hard to reproduce.
It's hard to give a definitive answer with this amount of code. But generally speaking, multithreaded programming is all about synchronizing access to data that might be accessed from multiple threads. If there is no lock or other synchronization primitive protecting access to the threadpool class itself, then you can potentially have multiple threads reaching your deletion loop at the same time, at which point you're pretty much guaranteed to be double-freeing memory.
The reason you're getting no crash when you delete a job's params at the end of the job function might be because access to a single job's params is already implicitly serialized by your work queue. Or you might just be getting lucky. In either case, it's best to think of locks and synchronization primitives not as something that protects code, but as something that protects data (I've always thought the term "critical section" is a bit misleading here, as it tends to lead people to think of a 'section of lines of code' rather than in terms of data access). In this case, since you want to access your jobs data from multiple threads, you need to protect it via a lock or some other synchronization primitive.
If you try to delete an object twice, the second delete is undefined behavior, because that heap block has already been freed.
Now, since you are in a multithreading context, the deletions might be done "almost" in parallel, which might hide the error on the second deletion because the first one is not yet finalized.
Use smart pointers or other RAII to handle your memory.
If you have access to boost or a TR1 library you can do something like this:
class ThreadPool
{
typedef pair<int, function<void (void)> > Job;
list< Job > jobList;
HANDLE mutex;
public:
void addJob(int jobid, const function<void (void)>& job) {
jobList.push_back( make_pair(jobid, job) );
}
Job getNextJob() {
struct MutexLocker {
HANDLE& mutex;
MutexLocker(HANDLE& mutex) : mutex(mutex){
WaitForSingleObject(mutex, INFINITE);
}
~MutexLocker() {
ReleaseMutex(mutex);
}
};
Job job = make_pair(-1, function<void (void)>());
const MutexLocker locker(this->mutex);
if (!this->jobList.empty()) {
job = this->jobList.front();
this->jobList.pop_front(); // std::list has no pop()
}
return job;
}
};
void workWithDouble( double value );
void workWithInt( int value );
void workWithValues( int, double);
void test() {
ThreadPool pool;
//...
pool.addJob( 0, bind(&workWithDouble, 0.1));
pool.addJob( 1, bind(&workWithInt, 1));
pool.addJob( 2, bind(&workWithValues, 1, 0.1));
}