I have a main thread that does some work and asynchronously delegates to another thread to send some data to another process.
I used a generic queue of shared_ptr<T>; the main thread pushes into the queue and the second thread pops the data and processes it.
I push many data types (e.g. shared_ptr<A>, shared_ptr<B>) deriving from T.
class A : public T{};
class B : public T{};
What's the best (most efficient) way to know the derived class from the generic type?
PS: dynamic_cast is not the best solution.
The producer pushes data of some concrete type.
The consumer pops the queue and does its job depending on the passed data.
The consumer uses this data when calling a given function.
The consumer should detect the derived class of the passed parameter in order to dispatch to the appropriate function.
void process(shared_ptr<T> ptr)
{
    if (type(ptr) == A) do work using A..
    if (type(ptr) == B) do stuff using B..
    ...
}
Thank you for your help and time.
As an alternative to queuing instances of std::shared_ptr<T> you could queue instances of a callable type such as std::function<void()>.
#include <functional>
using task_type = std::function<void()>;
queue_type<task_type> queue;
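Here queue_type stands for whatever thread-safe queue you already have; it is not a standard type. For reference, a minimal blocking sketch using only the standard library might look like this (illustrative, not a hardened implementation):

#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class queue_type {
public:
    void enqueue(T item) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            items_.push(std::move(item));
        }
        cv_.notify_one();
    }
    T dequeue() {
        std::unique_lock<std::mutex> lock(mutex_);
        // Blocks until an item is available; the predicate form
        // re-checks after every wakeup.
        cv_.wait(lock, [this] { return !items_.empty(); });
        T item = std::move(items_.front());
        items_.pop();
        return item;
    }
private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<T> items_;
};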
You state that you want to do something like...
if (type(ptr) == A) do work using A..
if (type(ptr) == B) do stuff using B..
Assuming the two functions in question are do_work and do_stuff, the client code would then be...
auto dataA = std::make_shared<A>(...);
queue.enqueue([dataA]()
{
    do_work(dataA);
});

auto dataB = std::make_shared<B>(...);
queue.enqueue([dataB]()
{
    do_stuff(dataB);
});
And the consumer code...
while (true) {
    try {
        /*
         * Dequeue the next task.
         */
        auto f = queue.dequeue();
        /*
         * Run it.
         */
        f();
    } catch (...) {
        /*
         * Handle errors here.
         */
    }
}
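One possible refinement, not part of the original answer: since the loop above never terminates, you could stop it with a "poison pill" task that flips a flag (done is an illustrative name):

#include <atomic>

std::atomic<bool> done{false};
queue.enqueue([&done]() { done = true; }); // poison pill: last task ever run
// ...and in the consumer, loop on while (!done) instead of while (true).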
I apologize for the ambiguous title, but I'll try to elaborate further here:
I have an application which includes (among others) a control class and a TCP server class.
Communication between the TCP and control class is done via this implementation:
#include <boost/signals2.hpp>
#include <functional>

// T - Observable object type
// S - Function signature
template <class T, typename S> class observer {
    using F = std::function<S>;
public:
    void register_notifier(T &obj, F f)
    {
        connection_ = obj.connect_notifier(std::move(f));
    }
protected:
    boost::signals2::scoped_connection connection_;
};

// S - Function signature
template <typename S> class observable {
public:
    boost::signals2::scoped_connection connect_notifier(std::function<S> f)
    {
        return notify.connect(std::move(f));
    }
protected:
    boost::signals2::signal<S> notify;
};
Where the TCP server class is the observable, and the control class is the observer.
The TCP server is running on a separate thread from the control class, and uses boost::asio::async_read. Whenever a message is received, the server object sends a notification via the 'notify' member, thus triggering the callback registered in the control class, and then waits to read the next message.
The problem is that I need to somehow safely and efficiently store the data currently held in the TCP server buffer and pass it to the control class before it's overwritten by the next message.
i.e.:
inline void ctl::tcp::server::handle_data_read(/* ... */)
{
    if (!error) {
        /* .... */
        notify(/* ? */); // Using a pointer to the buffer
                         // would obviously fail, as it
                         // is overwritten by the next read
    }
    /* .... */
}
Those were my ideas for a solution so far:
- Allocating heap memory and passing a pointer to it using unique_ptr, but I'm not sure if boost.signals2 is move-aware.
- Using an unordered map (shared between the objects) that maps an integer index to a unique_ptr of the data type (std::unordered_map<int, std::unique_ptr<data_type>>), then only passing the index of the element and 'popping' it in the control class callback, but it feels like overkill.
What I'm really looking for is an idea for a simple and efficient solution to pass the TCP buffer contents for each message between the threads.
Note that I'm also open to suggestions for redesigning my communication method between the objects if it's completely wrong.
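For illustration only, here is a minimal sketch of the simplest safe option: copy each message's bytes into a std::vector<char> and emit it by value, so every slot gets its own copy and the read buffer can be reused immediately. The names buffer_ and bytes_transferred are assumptions, not from the code above, and the handler signature is just the usual async_read one:

// With the observable signature S instantiated as void(message_type),
// the signal carries each message by value: boost::signals2 copies the
// argument into the slot call, so the fixed read buffer can be reused
// as soon as notify() returns.
using message_type = std::vector<char>;

inline void ctl::tcp::server::handle_data_read(const boost::system::error_code& error,
                                               std::size_t bytes_transferred)
{
    if (!error) {
        /* .... */
        notify(message_type(buffer_.data(), buffer_.data() + bytes_transferred));
    }
    /* .... start the next async_read .... */
}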
I have a thread pool that I use to execute many tiny jobs (millions of jobs, dozens/hundreds of microseconds each). The jobs are passed in the form of either:
std::bind(&fn, arg1, arg2, arg3...)
or
[&](){fn(arg1, arg2, arg3...);}
with the thread pool taking them like this:
std::queue<std::function<void(void)>> queue;

void addJob(std::function<void(void)> fn)
{
    queue.emplace(std::move(fn)); // std::queue has emplace, not emplace_back
}
Pretty standard stuff... except that I've noticed a bottleneck: if jobs execute quickly enough (less than a millisecond), the conversion from lambda/binder to std::function in the addJob function actually takes longer than the execution of the jobs themselves. After doing some reading, std::function is notoriously slow, so my bottleneck isn't necessarily unexpected.
Is there a faster way of doing this type of thing? I've looked into drop-in std::function replacements but they either weren't compatible with my compiler or weren't faster. I've also looked into "fast delegates" by Don Clugston but they don't seem to allow the passing of arguments along with functions (maybe I don't understand them correctly?).
I'm compiling with VS2015u3, and the functions passed to the jobs are all static, with their arguments being either ints/floats or pointers to other objects.
Have a separate queue for each of the task types - you probably don't have tens of thousands of task types. Each of these can be e.g. a static member of your tasks. Then addJob() is actually the ctor of Task and it's perfectly type-safe.
Then define a compile-time list of your task types and visit it via template metaprogramming (a for_each over types). It'll be way faster, as you don't need any virtual call, function pointer, or std::function<> to achieve this.
This will only work if your tuple code sees all the Task classes (so you can't e.g. add a new descendant of Task to an already running executable by loading the image from disc - hope that's a non-issue).
#include <iterator>
#include <list>

template<typename D> // CRTP on D
class Task {
public:
    // you might want to static_assert at some point that D is in TaskTypeList
    Task() : it_(tasks_.end()) {} // call enqueue() in the descendant
    ~Task() {
        // add your favorite lock here
        if (queued()) {
            tasks_.erase(it_);
        }
    }
    bool queued() const { return it_ != tasks_.end(); }
    static size_t ExecNext() {
        if (!tasks_.empty()) {
            // add your favorite lock here
            D* task = tasks_.front();
            tasks_.pop_front();
            task->it_ = tasks_.end(); // no longer queued
            // release lock
            (*task)();
        }
        return tasks_.size();
    }
protected:
    void enqueue()
    {
        // add your favorite lock here
        tasks_.push_back(static_cast<D*>(this));
        it_ = std::prev(tasks_.end());
    }
private:
    typename std::list<D*>::iterator it_;
    static std::list<D*> tasks_; // you can have one per thread, too - then you don't need locking, but tasks are assigned to threads statically
};
struct MyTask : Task<MyTask> {
MyTask() { enqueue(); } // call enqueue only when the class is ready
void operator()() { /* add task here */ }
// ...
};
struct MyTask2; // etc.
template<typename...>
struct list_ {};
using TaskTypeList = list_<MyTask, MyTask2>;
void thread_process(list_<>) {}
template<typename TaskType, typename... TaskTypes>
void thread_process(list_<TaskType, TaskTypes...>)
{
TaskType::ExecNext();
thread_process(list_<TaskTypes...>());
}
void thread_process(void*)
{
for (;;) {
thread_process(TaskTypeList());
}
}
There's a lot to tune on this code: different threads should start from different parts of the queue (or one would use a ring, or several queues and either static/dynamic assignment to threads), you'd send it to sleep when there are absolutely no tasks, one could have an enum for the tasks, etc.
Note that this can't be used with arbitrary lambdas: you need to list the task types. You'd have to 'communicate' the lambda's type out of the function where you declare it (e.g. by returning something like std::make_pair(retval, list_<TaskType>())), and sometimes that's not easy. However, you can always convert a lambda to a functor, which is straightforward - just ugly.
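For instance, a capturing lambda such as [x]{ f(x); } could become a named task type along these lines (f and the member names are illustrative, not from the answer above):

// Named functor equivalent of the lambda [x]{ f(x); } - unlike the
// lambda, FTask has a name that can appear in TaskTypeList.
struct FTask : Task<FTask> {
    int x_;
    explicit FTask(int x) : x_(x) { enqueue(); }
    void operator()() { f(x_); }
};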
Recently I've been thinking about a high-performance event-driven multi-threaded framework using C++11, built mainly on C++11 facilities such as std::thread, std::condition_variable, std::mutex and std::shared_ptr. In general, this framework has three basic components: job, worker and streamline; well, it seems to be a real factory. When a user constructs his business model on the server end, he just needs to consider the data and its processor. Once the model is established, the user only needs to construct a data class inheriting from job and a processor class inheriting from worker.
For example:
class Data : public job {};
class Processor : public worker {};
When the server gets data, it simply creates a Data object through auto data = std::make_shared<Data>() in the data-source callback thread and calls streamline.job_dispatch to transfer the processor and data to another thread. Of course the user doesn't have to think about freeing memory. streamline.job_dispatch mainly does the following:
void evd_thread_pool::job_dispatch(std::shared_ptr<evd_thread_job> job) {
    auto task = std::make_shared<evd_task_wrap>(job);
    task->worker = streamline.worker;
    // the worker has been registered in streamline first of all
    {
        std::unique_lock<std::mutex> lck(streamline.mutex);
        streamline.task_list.push_back(std::move(task));
    }
    streamline.cv.notify_all();
}
The evd_task_wrap used in job_dispatch is defined as:
struct evd_task_wrap {
    std::shared_ptr<evd_thread_job> order;
    std::shared_ptr<evd_thread_processor> worker;

    evd_task_wrap(std::shared_ptr<evd_thread_job>& o)
        : order(o) {}
};
Finally, the task_wrap is dispatched to the processing thread through task_list, which is a std::list object. The processing thread mainly does the following:
void evd_factory_impl::thread_proc() {
    std::shared_ptr<evd_task_wrap> wrap = nullptr;
    while (true) {
        {
            std::unique_lock<std::mutex> lck(streamline.mutex);
            if (streamline.task_list.empty())
                streamline.cv.wait(lck,
                    [&]()->bool { return !streamline.task_list.empty(); });
            wrap = std::move(streamline.task_list.front());
            streamline.task_list.pop_front();
        }
        if (-1 == wrap->order->get_type())
            break;
        wrap->worker->process_task(wrap->order);
        wrap.reset();
    }
}
But I don't know why the process often crashes in the thread_proc function. The core dump shows that sometimes wrap is an empty shared_ptr, or a segmentation fault happens in _Sp_counted_ptr_inplace::_M_dispose, which is called in wrap.reset(). I suppose the shared_ptr has a thread-synchronization problem in this scenario, although I know the control block in shared_ptr is thread-safe. And of course the shared_ptr in job_dispatch and thread_proc are different shared_ptr objects, even though they point to the same storage. Does anyone have a more specific suggestion on how to solve this problem? Or does there exist a similar lightweight framework with automatic memory management using C++11?
An example of process_task:
void log_handle::process_task(std::shared_ptr<crx::evd_thread_job> job) {
    auto j = std::dynamic_pointer_cast<log_job>(job);
    j->log->Printf(0, j->print_str.c_str());
    write(STDOUT_FILENO, j->print_str.c_str(), j->print_str.size());
}
class log_factory {
public:
    log_factory(const std::string& name);
    virtual ~log_factory();

    void print_ts(const char *format, ...) { // here dispatch the job
        char log_buf[4096] = {0};
        va_list args;
        va_start(args, format);
        vsnprintf(log_buf, sizeof(log_buf), format, args); // bounded, unlike vsprintf
        va_end(args);
        auto job = std::make_shared<log_job>(log_buf, &m_log);
        m_log_th.job_dispatch(job);
    }

public:
    E15_Log m_log;
    std::shared_ptr<log_handle> m_log_handle;
    crx::evd_thread_pool m_log_th;
};
I detected a problem in your code, which may or may not be related:
You use notify_all on your condition variable. That will awaken ALL waiting threads. It is OK if you wrap your wait in a while loop, like:

while (streamline.task_list.empty())
    streamline.cv.wait(lck);

But since you are using an if, all threads leave the wait. If you dispatch a single item while several consumer threads are waiting, all but one of them will call wrap = std::move(streamline.task_list.front()); on an empty task list and cause undefined behaviour.
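Applied to the thread_proc above, a minimal corrected wait could look like this (a sketch; the predicate overload of wait is just the while loop written internally):

std::shared_ptr<evd_task_wrap> wrap;
{
    std::unique_lock<std::mutex> lck(streamline.mutex);
    // Every thread that wakes up re-checks the list before touching it.
    streamline.cv.wait(lck, [&] { return !streamline.task_list.empty(); });
    wrap = std::move(streamline.task_list.front());
    streamline.task_list.pop_front();
}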
The story:
I make use of the QtConcurrent API for every "long" operation in my application.
It works pretty well, but I face some problems with QObject creation.
Consider this piece of code, which uses a thread to create a "Foo" object:
QFuture<Foo*> future = QtConcurrent::run([=]()
{
    Data* data = /* long operation to acquire the data */;
    Foo* result = new Foo(data);
    return result;
});
It works well, but if the "Foo" class derives from QObject, the "result" instance belongs to the QThread that created the object.
So to properly use signals/slots with the "result" instance, one should do something like this:
QFuture<Foo*> future = QtConcurrent::run([=]()
{
    Data* data = /* long operation to acquire the data */;
    Foo* result = new Foo(data);
    // Move "result" to the main application thread
    result->moveToThread(qApp->thread());
    return result;
});
Now all works as expected, and I think this is the normal behaviour and the nominal solution.
The problem:
I have a lot of code of this kind, which sometimes creates objects that can themselves create objects. Most of them are created properly, with a "moveToThread" call.
But sometimes I miss one "moveToThread" call.
And then a lot of things look like they don't work (because that object's slots are "broken"), without any Qt warning.
I sometimes spend a lot of time figuring out why something doesn't work, before understanding that it's only because the slots are no longer called on a particular object instance.
The question:
Is there any way to help me prevent/detect/debug this kind of situation?
For example:
- having a warning logged every time a QThread is deleted while objects that belong to it are still alive?
- having a warning logged every time a signal is emitted to an object whose QThread has been deleted?
- having a warning logged every time a signal is emitted to an object (in another thread) and not processed before a timeout?
Thanks
It is possible to track an object's movement among threads. Just before an object is moved to the new thread, it is sent a ThreadChange event. You can filter that event and have your code run to take a note of when an object leaves a thread. But it's too early at that point to know of whether the object goes anywhere. To detect that, you need to post a metacall (see this question) to the object's queue to be executed as soon as the object's event processing resumes in the new thread. You'd also attach to QThread::finished to get a chance to look through your object list and check if any of them live on the thread that's about to die.
But all this is fairly involved: each thread will need its own tracker/filter object, as event filters must live in the object's thread. We're probably talking of more than 200 lines of code to do it right, handling all corner cases.
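For the filtering part alone, the core of the idea is small. A sketch follows; the class name is illustrative, and a complete tracker still needs the per-thread instances and QThread::finished bookkeeping described above:

#include <QtCore>

// Logs whenever a watched object is about to be moved to another thread.
class ThreadChangeLogger : public QObject {
public:
    using QObject::QObject;
    bool eventFilter(QObject* obj, QEvent* ev) override {
        if (ev->type() == QEvent::ThreadChange)
            qDebug() << obj << "is leaving thread" << obj->thread();
        return false; // don't consume the event
    }
};

// Usage: the logger must live in the watched object's thread.
// obj->installEventFilter(&logger);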
Instead, you can leverage RAII and hold your objects using handles that manage thread affinity as a resource (because it is one!):
// https://github.com/KubaO/stackoverflown/tree/master/questions/thread-track-38611886
#include <QtConcurrent>
template <typename T>
class MainResult {
    Q_DISABLE_COPY(MainResult)
    T * m_obj;
public:
    template<typename... Args>
    MainResult(Args&&... args) : m_obj{ new T(std::forward<Args>(args)...) } {}
    MainResult(T * obj) : m_obj{obj} {}
    T* operator->() const { return m_obj; }
    operator T*() const { return m_obj; }
    T* operator()() const { return m_obj; }
    ~MainResult() { m_obj->moveToThread(qApp->thread()); }
};
struct Foo : QObject { Foo(int) {} };
You can return a MainResult by value, but the return type of the functor must be explicitly given:
QFuture<Foo*> test1() {
    return QtConcurrent::run([=]()->Foo*{ // explicit return type
        MainResult<Foo> obj{1};
        obj->setObjectName("Hello");
        return obj; // return by value
    });
}
Alternatively, you can return the result of calling MainResult; it's a functor itself to save a bit of typing but this might be considered a hack and perhaps you should convert operator()() to a method with a short name.
QFuture<Foo*> test2() {
    return QtConcurrent::run([=](){ // deduced return type
        MainResult<Foo> obj{1};
        obj->setObjectName("Hello");
        return obj(); // return by call
    });
}
While it's preferable to construct the object along with the handle, it's also possible to pass an instance pointer to the handle's constructor:
MainResult<Foo> obj{ new Foo{1} };
Is it possible to store a templated class like
template <typename rtn, typename arg>
class BufferAccessor {
public:
    int ThreadID;
    virtual rtn do_work(arg) = 0;
};
BufferAccessor<void,int> access1;
BufferAccessor<int,void> access2;
in the same container, such as a vector or list?
edit:
The purpose of this is that I am trying to make a circular buffer where the objects that want to use the buffer need to register with it. The buffer will store a boost::shared_ptr to the accessor objects and generate callbacks to their functions that push or pull data to/from the buffer. The callbacks will be used in a generic thread worker function I have created, similar to a thread pool, except that the workers need to access a shared memory object. Below is some code I have typed up that might help illustrate what I am trying to do, but it hasn't been compiled yet, and this is also my first time using bind, function, and multi-threading.
typedef boost::function<BUF_QObj (void)> CallbackT_pro;
typedef boost::function<void (BUF_QObj)> CallbackT_con;
typedef boost::shared_ptr<BufferAccessor> buf_ptr;

// Register the worker objects
int register_consumer(BufferAccessor &accessor) {
    mRegCons[mNumConsumers] = buf_ptr(&accessor);
    return ++mNumConsumers;
}

int register_producer(BufferAccessor &accessor) {
    mRegPros[mNumProducers] = buf_ptr(&accessor);
    return ++mNumProducers;
}

// Dispatch consumer threads
for (x = 0; x < mNumConsumers; ++x) {
    CallbackT_con callback_con = boost::bind(&BufferAccessor::do_work, mRegCons[x], _1);
    tw = new boost::thread(boost::bind(&RT_ActiveCircularBuffer::consumerWorker, this, callback_con));
    consumers.add(tw);
}

// Dispatch producer threads
for (x = 0; x < mNumProducers; ++x) {
    CallbackT_pro callback_pro = boost::bind(&BufferAccessor::do_work, mRegPros[x]);
    tw = new boost::thread(boost::bind(&RT_ActiveCircularBuffer::producerWorker, this, callback_pro));
    producers.add(tw);
}
// Thread Template Workers - Consumer
void consumerWorker(CallbackT_con worker) {
    struct BUF_QObj *qData;
    { // Wait for the start signal
        boost::mutex::scoped_lock lock(mLock);
        while (!mRun)
            cond.wait(lock);
    }
    while (!mTerminate) {
        // Set interruption point so that thread can be interrupted
        boost::thread::interruption_point();
        { // Code Block
            boost::mutex::scoped_lock lock(mLock);
            while (mBuf.empty())
                cond.wait(lock);
            qData = mBuf.front();
            mBuf.pop_front(); // remove the front element
        } // End Code Block
        worker(qData); // Process data
        // Sleep that thread for 1 uSec
        boost::this_thread::sleep(boost::posix_time::microseconds(1));
    } // End of while loop
}
// Thread Template Workers - Producer
void producerWorker(CallbackT_pro worker) {
    struct BUF_QObj *qData;
    boost::this_thread::sleep(boost::posix_time::microseconds(1));
    { // Wait for the start signal
        boost::mutex::scoped_lock lock(mLock);
        while (!mRun)
            cond.wait(lock);
    }
    while (!mTerminate) {
        // Set interruption point so that thread can be interrupted
        boost::thread::interruption_point();
        qData = worker(); // get data to be processed
        { // Code Block
            boost::mutex::scoped_lock lock(mLock);
            mBuf.push_back(qData);
            cond.notify_one();
        } // End Code Block
        // Sleep that thread for 1 uSec
        boost::this_thread::sleep(boost::posix_time::microseconds(1));
    } // End of while loop
}
No it's not, because STL containers are homogeneous, and access1 and access2 have completely different, unrelated types. But you could make the class BufferAccessor a non-template and make the do_work member a template, like this:
class BufferAccessor
{
public:
    template<class R, class A>
    R doWork(A arg) { /* ... */ }
};
In this case you could store BufferAccessors in a container, but note that you can't make a member function template virtual.
Yes, you can use vector<BufferAccessor<void,int>> to store BufferAccessor<void,int> objects and vector<BufferAccessor<int,void>> to store BufferAccessor<int,void> objects.
What you can't do is use the same vector to store both BufferAccessor<int,void> and BufferAccessor<void,int> objects.
The reason it doesn't work is that BufferAccessor<void,int> and BufferAccessor<int,void> are two different classes.
Note: it is possible to use the same vector to store both BufferAccessor<int,void> and BufferAccessor<void,int>, but you would have to store them as void pointers using shared_ptr<void>. Or better yet, you can use a boost::variant.
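A minimal sketch of the boost::variant route follows. Since BufferAccessor is abstract, the variant holds shared_ptrs to the instantiations; also note that a void parameter type (as in the question's BufferAccessor<int, void>) will not actually instantiate, so an int-argument accessor stands in for the producer side here:

#include <boost/shared_ptr.hpp>
#include <boost/variant.hpp>
#include <vector>

typedef boost::shared_ptr<BufferAccessor<void, int> > ConsumerPtr;
typedef boost::shared_ptr<BufferAccessor<int, int> > ProducerPtr;
typedef boost::variant<ConsumerPtr, ProducerPtr> AnyAccessor;

// One visitor overload per instantiation; apply_visitor picks the right one.
struct dispatch : boost::static_visitor<void> {
    void operator()(const ConsumerPtr& p) const { p->do_work(42); }
    void operator()(const ProducerPtr& p) const { int r = p->do_work(0); (void)r; }
};

std::vector<AnyAccessor> accessors; // one container, both instantiations
// for (std::size_t i = 0; i < accessors.size(); ++i)
//     boost::apply_visitor(dispatch(), accessors[i]);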