I found a threadpool which doesn't seem to be in boost yet, but I may be able to use it for now (unless there is a better solution).
I have several million small tasks that I want to execute concurrently and I wanted to use a threadpool to schedule the execution of the tasks. The documentation of the threadpool provides (roughly) this example:
#include "threadpool.hpp"
using namespace boost::threadpool;
// A short task
void task()
{
// do some work
}
void execute_with_threadpool(int poolSize, int numTasks)
{
// Create a thread pool.
pool tp(poolSize);
for(int i = 0; i < numTasks; i++)
{
// Add some tasks to the pool.
tp.schedule(&task);
}
// Leave this function and wait until all tasks are finished.
}
However, the example only allows me to schedule non-member functions (or tasks). Is there a way that I can schedule a member function for execution?
Update:
OK, supposedly the library allows you to schedule a Runnable for execution, but I can't figure out where the Runnable class that I'm supposed to inherit from is defined.
template<typename Pool, typename Runnable>
bool schedule(Pool& pool, shared_ptr<Runnable> const & obj);
Update 2:
I think I found out what I need to do: I have to make a runnable which will take any parameters that would be necessary (including a reference to the object that has a function which will be called), then I use the static schedule function to schedule the runnable on the given threadpool:
class Runnable
{
private:
MyClass* _target;
Data* _data;
public:
    Runnable(MyClass* target, Data* data)
        : _target(target), _data(data) {}
    ~Runnable() {}
void run()
{
_target->doWork(_data);
}
};
Here is how I schedule it within MyClass:
void MyClass::doWork(Data* data)
{
// do the work
}
void MyClass::produce()
{
boost::threadpool::schedule(myThreadPool, boost::shared_ptr<Runnable>(new Runnable(myTarget, new Data())));
}
However, the adaptor from the library has a bug in it:
template<typename Pool, typename Runnable>
bool schedule(Pool& pool, shared_ptr<Runnable> const & obj)
{
return pool->schedule(bind(&Runnable::run, obj));
}
Note that it takes a reference to a Pool but it tries to call it as if it was a pointer to a Pool, so I had to fix that too (just changing the -> to a .).
To schedule any function or member function, use Boost.Bind or Boost.Lambda (in this order). Also, you can consider special libraries for your situation; I can recommend Intel Threading Building Blocks or, in case you use VC2010, the Microsoft Parallel Patterns Library.
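For example, a minimal sketch of the Boost.Bind approach, reusing MyClass::doWork from the question (assuming pool::schedule accepts any nullary callable):

#include <boost/bind.hpp>
#include "threadpool.hpp"

void schedule_member(boost::threadpool::pool& tp, MyClass& obj, Data* data)
{
    // bind packages the object pointer and the argument into a nullary
    // callable, so no hand-written Runnable wrapper is needed
    tp.schedule(boost::bind(&MyClass::doWork, &obj, data));
}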
EDIT:
I've never used this library or heard anything bad about it, but it's been around long enough and still hasn't been included into Boost. I would check why.
EDIT 2:
Another option is Boost.Asio. It's primarily a networking library, but it has a scheduler that you can use. I would use this multithreading approach. Just instead of performing asynchronous network operations, schedule your tasks with boost::asio::io_service::post().
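A minimal sketch of that idea, borrowing MyClass and Data from the question (the io_service needs one or more threads calling io_service::run() to execute the posted tasks):

#include <boost/asio.hpp>
#include <boost/bind.hpp>

// post() queues the bound call; whichever thread is running
// io_service::run() will pick it up and execute it
void produce(boost::asio::io_service& io, MyClass& obj, Data* data)
{
    io.post(boost::bind(&MyClass::doWork, &obj, data));
}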
However, as it turns out, I can't use that threadpool library at all, because I am mixing native C++ (DLL), C++/CLI (DLL) and .NET code: I have a C++/CLI library that wraps a native C++ library which in turn uses boost::thread. Unfortunately, that results in a BadImageFormatException at runtime (a problem which has previously been discussed by other people):
The problem is that the static boost thread library tries to hook the
native win32 PE TLS callbacks in order to ensure that the thread-local
data used by boost thread is cleaned up correctly. This is not
compatible with a C++/CLI executable.
This is the solution I was able to implement using the information from http://think-async.com/Asio/Recipes. I tried implementing the recipe and found that the code worked on Windows but not on Linux. I was unable to figure out the problem, but searching the internet turned up the key: the work object has to be held by a scoped pointer inside a block, so it is destroyed before the threads are joined. I've included the void task() that the user wanted for my example; I was able to create a convenience function and pass pointers to the function that does the work. For my case, I create a thread pool that uses boost::thread::hardware_concurrency() to get the possible number of threads. I've used the recipe below with as many as 80 tasks on 15 threads.
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <boost/scoped_ptr.hpp>
// A short task
void task()
{
// do some work
}
void execute_with_threadpool( int numTasks,
int poolSize = boost::thread::hardware_concurrency() )
{
boost::asio::io_service io_service;
boost::thread_group threads;
{
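        // the work object keeps io_service::run() from returning while the
        // queue is empty; destroying it at the end of this block lets the
        // pool threads finish the remaining tasks and exit run()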
boost::scoped_ptr< boost::asio::io_service::work > work( new boost::asio::io_service::work(io_service) );
for(int t = 0; t < poolSize; t++)
{
threads.create_thread(boost::bind(&boost::asio::io_service::run, &io_service));
}
        for( int t = 0; t < numTasks; t++ )
        {
            io_service.post(boost::bind(task));
        }
}
threads.join_all();
}
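A hypothetical usage sketch (the task count is arbitrary):

int main()
{
    // runs 80 copies of task() on hardware_concurrency() threads,
    // or pass an explicit pool size as the second argument
    execute_with_threadpool(80);
    return 0;
}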
Figured it out: you must have a run() method defined. This is the easiest way:
class Command
{
public:
Command() {}
~Command() {}
void run() {}
};
In main(), tp is your threadpool:
shared_ptr<Command> pc(new Command());
tp.schedule(bind(&Command::run, pc));
Done. (Note that bind copies the shared_ptr, so the Command object stays alive until the scheduled task has run.)
Related
I'm working on a large code base that, for performance reasons, limits access to one or more resources. A thread pool is a good analogy to my problem - we don't want everyone in the process spinning up their own threads, so a common pool with a producer/consumer job queue exists in an attempt to limit the number of threads running at any given time.
There isn't an elegant way to make ownership of the thread pool clear so, for all intents and purposes, it is a singleton. I speak better in code than in English, so here is an example:
class ThreadPool {
public:
  static void SubmitTask(Task&& t) { instance_.DoSubmitTask(std::move(t)); }
private:
  // the non-static implementation; calling instance_.SubmitTask() here
  // would just re-enter the static function and recurse forever
  void DoSubmitTask(Task&& t);
  ~ThreadPool() {
    std::for_each(pool_.begin(), pool_.end(), [](auto &t) {
      if (t.joinable()) t.join();
    });
  }
private:
  std::array<std::thread, 5> pool_;
  static ThreadPool instance_; // here or anonymous namespace
};
The issue with this pattern is instance_ doesn't go out of scope until after main has returned which typically results in races or crashes. Also, keep in mind this is analogous to my problem so better ways to do something asynchronously isn't really what I'm after; just better ways to manage the lifecycle of static objects.
Alternatives I've thought of:
Provide an explicit Terminate function that must be called manually before leaving main.
Not using statics at all and leaving it up to the app to ensure only a single instance exists.
Not using statics at all and crashing the app if more than 1 instance is instantiated.
I also realize that a small, sharp, team could probably make the above code work just fine. However, this code lives within a large organization that has many developers of various skill levels contributing to it.
You could explicitly bind the lifetime to your main function. Either add a static shutdown() method to your ThreadPool that does any cleanup you need and call it at the end of main().
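A minimal sketch of the explicit-shutdown variant (names are placeholders):

class ThreadPool {
public:
  static void SubmitTask(Task&& t);
  static void Shutdown(); // joins the workers; nothing outlives main()
};

int main() {
  // ... submit work ...
  ThreadPool::Shutdown(); // deterministic teardown before statics are destroyed
}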
Or fully bind the lifetime via RAII:
class ThreadPool {
public:
static ThreadPool* get() { return instance_.get(); }
void SubmitTask(Task&& t) { ... }
~ThreadPool() { ... }
private:
ThreadPool() {}
static inline std::unique_ptr<ThreadPool> instance_;
friend class ThreadPoolScope;
};
class ThreadPoolScope {
public:
ThreadPoolScope(){
assert(!ThreadPool::instance_);
ThreadPool::instance_.reset(new ThreadPool());
}
~ThreadPoolScope(){
ThreadPool::instance_.reset();
}
};
int main() {
ThreadPoolScope thread_pool_scope{};
...
}
void some_func() {
ThreadPool::get()->SubmitTask(...);
}
This makes destruction completely deterministic and if you do this with multiple objects, they are automatically destroyed in the correct order.
I have a thread pool that I use to execute many tiny jobs (millions of jobs, dozens/hundreds of microseconds each). The jobs are passed in the form of either:
std::bind(&fn, arg1, arg2, arg3...)
or
[&](){fn(arg1, arg2, arg3...);}
with the thread pool taking them like this:
std::queue<std::function<void(void)>> queue;
void addJob(std::function<void(void)> fn)
{
    queue.emplace(std::move(fn)); // std::queue has emplace, not emplace_back
}
Pretty standard stuff... except that I've noticed a bottleneck: if jobs execute quickly enough (less than a millisecond), the conversion from lambda/binder to std::function in the addJob function actually takes longer than the execution of the jobs themselves. After doing some reading, I learned that std::function is notoriously slow, so my bottleneck isn't necessarily unexpected.
Is there a faster way of doing this type of thing? I've looked into drop-in std::function replacements but they either weren't compatible with my compiler or weren't faster. I've also looked into "fast delegates" by Don Clugston but they don't seem to allow the passing of arguments along with functions (maybe I don't understand them correctly?).
I'm compiling with VS2015u3, and the functions passed to the jobs are all static, with their arguments being either ints/floats or pointers to other objects.
Have a separate queue for each of the task types - you probably don't have tens of thousands of task types. Each of these can be e.g. a static member of your tasks. Then addJob() is actually the ctor of Task and it's perfectly type-safe.
Then define a compile-time list of your task types and visit it via template metaprogramming (a for_each over types). It'll be way faster, as you don't need any virtual call, function pointer, or std::function<> to achieve this.
This will only work if your tuple code sees all the Task classes (so you can't e.g. add a new descendant of Task to an already running executable by loading the image from disc - hope that's a non-issue).
template<typename D> // CRTP on D
class Task {
public:
    // you might want to static_assert at some point that D is in TaskTypeList
    Task() : it_(tasks_.end()) {} // call enqueue() in descendant
    ~Task() {
        // add your favorite lock here
        if (queued()) {
            tasks_.erase(it_);
        }
    }
    bool queued() const { return it_ != tasks_.end(); }
    static size_t ExecNext() {
        D* task = nullptr;
        {
            // add your favorite lock here
            if (!tasks_.empty()) {
                task = tasks_.front();
                tasks_.pop_front();
                task->it_ = tasks_.end(); // mark as no longer queued
            }
            // release lock
        }
        if (task) {
            (*task)(); // run the task outside the lock
        }
        return tasks_.size();
    }
protected:
    void enqueue()
    {
        // add your favorite lock here
        tasks_.push_back(static_cast<D*>(this));
        it_ = std::prev(tasks_.end());
    }
private:
    typename std::list<D*>::iterator it_;
    static std::list<D*> tasks_; // needs an out-of-class definition per D;
                                 // you can have one per thread, too - then you
                                 // don't need locking, but tasks are assigned
                                 // to threads statically
};
struct MyTask : Task<MyTask> {
MyTask() { enqueue(); } // call enqueue only when the class is ready
void operator()() { /* add task here */ }
// ...
};
struct MyTask2; // etc.
template<typename...>
struct list_ {};
using TaskTypeList = list_<MyTask, MyTask2>;
void thread_process(list_<>) {}
template<typename TaskType, typename... TaskTypes>
void thread_process(list_<TaskType, TaskTypes...>)
{
    TaskType::ExecNext();
    thread_process(list_<TaskTypes...>());
}
void thread_process(void*)
{
for (;;) {
thread_process(TaskTypeList());
}
}
There's a lot to tune on this code: different threads should start from different parts of the queue (or one would use a ring, or several queues and either static/dynamic assignment to threads), you'd send it to sleep when there are absolutely no tasks, one could have an enum for the tasks, etc.
Note that this can't be used with arbitrary lambdas: you need to list the task types. You'd need to 'communicate' the lambda type out of the function where you declare it (e.g. by returning std::make_pair(retval, list_<...>())), and sometimes that's not easy. However, you can always convert a lambda to a functor, which is straightforward - just ugly.
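For instance, a hypothetical conversion of a lambda into a named task type (the captures become members):

// a lambda [a, b](){ use(a, b); } hoisted into a named functor
// so it can participate in the compile-time task list
struct MyLambdaTask : Task<MyLambdaTask> {
    int a; float b; // former captures
    MyLambdaTask(int a_, float b_) : a(a_), b(b_) { enqueue(); }
    void operator()() { /* former lambda body, using a and b */ }
};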
Recently I've been thinking about a high-performance, event-driven, multi-threaded framework using C++11. It mainly takes C++11 facilities such as std::thread, std::condition_variable, std::mutex, std::shared_ptr etc. into consideration. In general, this framework has three basic components: job, worker and streamline; well, it seems to be a real factory. When users construct their business model on the server end, they just need to consider the data and its processor. Once the model is established, the user only needs to construct a data class inherited from job and a processor class inherited from worker.
For example:
class Data : public job {};
class Processor : public worker {};
When the server gets data, it just news a Data object through auto data = std::make_shared<Data>() in the data-source callback thread and calls streamline.job_dispatch to transfer the processor and data to another thread. Of course, the user doesn't have to think about freeing memory. The streamline.job_dispatch mainly does the following:
void evd_thread_pool::job_dispatch(std::shared_ptr<evd_thread_job> job) {
auto task = std::make_shared<evd_task_wrap>(job);
task->worker = streamline.worker;
// worker has been registered in streamline first of all
{
std::unique_lock<std::mutex> lck(streamline.mutex);
streamline.task_list.push_back(std::move(task));
}
streamline.cv.notify_all();
}
The evd_task_wrap used in job_dispatch is defined as:
struct evd_task_wrap {
std::shared_ptr<evd_thread_job> order;
std::shared_ptr<evd_thread_processor> worker;
evd_task_wrap(std::shared_ptr<evd_thread_job>& o)
:order(o) {}
};
Finally the task_wrap will be dispatched to the processing thread through task_list, which is a std::list object. The processing thread mainly does the following:
void evd_factory_impl::thread_proc() {
std::shared_ptr<evd_task_wrap> wrap = nullptr;
while (true) {
{
std::unique_lock<std::mutex> lck(streamline.mutex);
if (streamline.task_list.empty())
streamline.cv.wait(lck,
[&]()->bool{return !streamline.task_list.empty();});
wrap = std::move(streamline.task_list.front());
streamline.task_list.pop_front();
}
if (-1 == wrap->order->get_type())
break;
wrap->worker->process_task(wrap->order);
wrap.reset();
}
}
But I don't know why the process often crashes in the thread_proc function. The coredump shows that sometimes wrap is an empty shared_ptr, or a segmentation fault happens in _Sp_counted_ptr_inplace::_M_dispose, which is called from wrap.reset(). I suppose the shared_ptr has a thread-synchronization problem in this scenario, although I know the control block in shared_ptr is thread-safe. And of course the shared_ptr in job_dispatch and thread_proc are different shared_ptr objects even though they point to the same storage. Does anyone have a more specific suggestion on how to solve this problem? Or does there exist a similar lightweight framework with automatic memory management using C++11?
An example of process_task:
void log_handle::process_task(std::shared_ptr<crx::evd_thread_job> job) {
auto j = std::dynamic_pointer_cast<log_job>(job);
j->log->Printf(0, j->print_str.c_str());
write(STDOUT_FILENO, j->print_str.c_str(), j->print_str.size());
}
class log_factory {
public:
log_factory(const std::string& name);
virtual ~log_factory();
void print_ts(const char *format, ...) { //here dispatch the job
char log_buf[4096] = {0};
va_list args;
va_start(args, format);
        vsnprintf(log_buf, sizeof(log_buf), format, args); // bounded, unlike vsprintf
va_end(args);
auto job = std::make_shared<log_job>(log_buf, &m_log);
m_log_th.job_dispatch(job);
}
public:
E15_Log m_log;
std::shared_ptr<log_handle> m_log_handle;
crx::evd_thread_pool m_log_th;
};
I detected a potential problem in your code, which may or may not be related:
You use notify_all on your condition variable. That will awaken ALL threads from sleep. It is OK as long as every consumer re-checks the condition before proceeding, like:
while (streamline.task_list.empty())
    streamline.cv.wait(lck, [&]()->bool{return !streamline.task_list.empty();});
Since you are using the predicate overload of wait, which internally loops until the predicate holds, your if is actually harmless here. But if you ever switch to the plain wait(lck) overload while keeping only the if, it becomes a race: when you dispatch a single product to several consumer threads, all but one of them would call wrap = std::move(streamline.task_list.front()); while the task_list is empty and cause UB.
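For reference, a minimal sketch of a consumer loop that stays safe with either overload (assuming the same streamline members as in the question):

void evd_factory_impl::thread_proc() {
    while (true) {
        std::shared_ptr<evd_task_wrap> wrap;
        {
            std::unique_lock<std::mutex> lck(streamline.mutex);
            // the predicate overload re-checks the list on every wakeup,
            // so notify_all and spurious wakeups are both harmless
            streamline.cv.wait(lck, [&] { return !streamline.task_list.empty(); });
            wrap = std::move(streamline.task_list.front());
            streamline.task_list.pop_front();
        }
        if (-1 == wrap->order->get_type())
            break;
        wrap->worker->process_task(wrap->order);
    }
}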
I need your help with wxWidgets. I have 2 threads (1 wxTimer and 1 wxThread), and I need to communicate between these 2 threads. I have a class that contains methods to read/write variables in this class (shared memory through this object).
My problem is: I instantiate this class with "new" in one thread, but I don't know what is needed in the second thread. If I instantiate it there too, the addresses of the variables are different, and since the threads need to communicate, they need to see the same values :/
I know I need a wxSemaphore to prevent errors when both threads access the variables at the same time.
Thank you for your help!
EDIT: My code
So, I need to make the link with my code. Thank you all ;)
Here is the declaration of my wxTimer in my class EvtFramePrincipal (the GUI):
In .h
EvtFramePrincipal( wxWindow* parent );
#include <wx/timer.h>
wxTimer m_timer;
In .cpp, in the EvtFramePrincipal constructor:
EvtFramePrincipal::EvtFramePrincipal( wxWindow* parent )
:
FramePrincipal( parent ),m_timer(this)
{
Connect(wxID_ANY,wxEVT_TIMER,wxTimerEventHandler(EvtFramePrincipal::OnTimer),NULL,this);
m_timer.Start(250);
}
So the OnTimer method is called every 250 ms thanks to this line.
My second thread is started from EvtFramePrincipal (the GUI):
in .h EvtFramePrincipal
#include "../Client.h"
Client *ClientIdle;
in .cpp EvtFramePrincipal
ClientIdle= new Client();
ClientIdle->Run();
In .h Client (Thread)
class Client: public wxThread
public:
Client();
virtual void *Entry();
virtual void OnExit();
In .cpp Client (Thread)
Client::Client() : wxThread()
{
}
So far, no problem; the threads are OK?
Now I need this class to be used as a messenger between my 2 threads:
#ifndef PARTAGE_H
#define PARTAGE_H
#include "wx/string.h"
#include <iostream>
using std::cout;
using std::endl;
class Partage
{
public:
Partage();
virtual ~Partage();
bool Return_Capteur_Aval()
{ return Etat_Capteur_Aval; }
bool Return_Capteur_Amont()
{ return Etat_Capteur_Amont; }
bool Return_Etat_Barriere()
{ return Etat_Barriere; }
bool Return_Ouverture()
{ return Demande_Ouverture; }
bool Return_Fermeture()
{ return Demande_Fermeture; }
bool Return_Appel()
{ return Appel_Gardien; }
void Set_Ouverture(bool Etat)
{ Demande_Ouverture=Etat; }
void Set_Fermeture(bool Etat)
{ Demande_Fermeture=Etat; }
void Set_Capteur_Aval(bool Etat)
{ Etat_Capteur_Aval=Etat; }
void Set_Capteur_Amont(bool Etat)
{ Etat_Capteur_Amont=Etat; }
void Set_Barriere(bool Etat)
{ Etat_Barriere=Etat; }
void Set_Appel(bool Etat)
{ Appel_Gardien=Etat; }
void Set_Code(wxString valeur_code)
{ Code=valeur_code; }
void Set_Badge(wxString numero_badge)
{ Badge=numero_badge; }
void Set_Message(wxString message)
{
Message_Affiche=wxT("");
Message_Affiche=message;
}
wxString Get_Message()
{
return Message_Affiche;
}
wxString Get_Code()
{ return Code; }
wxString Get_Badge()
{ return Badge; }
protected:
private:
bool Etat_Capteur_Aval;
bool Etat_Capteur_Amont;
bool Etat_Barriere;
bool Demande_Ouverture;
bool Demande_Fermeture;
bool Appel_Gardien;
wxString Code;
wxString Badge;
wxString Message_Affiche;
};
#endif // PARTAGE_H
So in my EvtFramePrincipal (the wxTimer side), I create an instance of this class with new. But in the other thread (the wxThread), what do I need to do to communicate?
Sorry if this is difficult to understand :/
The main thread should create the shared variable first. After that, you can create both threads and pass each of them a pointer to the shared variable.
That way, both of them know how to interact with the shared variable. You need to use a mutex or wxSemaphore in the methods of the shared variable.
You can use a singleton to get access to a central object.
Alternatively, create the central object before creating the threads and pass the reference to the central object to threads.
Use a mutex in the central object to prevent simultaneous access.
Creating one central object on each thread is not an option.
EDIT 1: Adding more details and examples
Let's start with some assumptions. The OP indicated that
I have 2 threads (1 wxTimer and 1 wxThread)
To tell the truth, I know very little of the wxWidgets framework, but there's always the documentation. So I can see that:
wxTimer provides a timer that will execute the wxTimer::Notify() method when the timer expires. The documentation doesn't say anything about thread execution (although there's a note that "A timer can only be used from the main thread", which I'm not sure how to interpret). I can guess that we should expect the Notify method to be executed in some event-loop or timer-loop thread or threads.
wxThread provides a model for thread execution that runs the wxThread::Entry() method. Running a wxThread object will actually create a thread that runs the Entry method.
So your problem is that you need the same object to be accessible in both the wxTimer::Notify() and wxThread::Entry() methods.
This object:
"It's not one variable but a lot of them stored in one class"
e.g.
struct SharedData {
// NOTE: This is very simplistic.
// since the information here will be modified/read by
// multiple threads, it should be protected by one or more
// mutexes
// so probably a class with getter/setters will be better suited
// so that access with mutexes can be enforced within the class.
SharedData():var2(0) { }
std::string var1;
int var2;
};
of which you have an instance somewhere:
std::shared_ptr<SharedData> myData=std::make_shared<SharedData>();
or perhaps in pointer form or perhaps as a local variable or object attribute
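As the comment in the struct suggests, a thread-safe variant might enforce locking inside getters/setters; a minimal sketch, assuming C++11's <mutex> is available (the class and member names are placeholders):

#include <mutex>
#include <string>

class SafeSharedData {
public:
    std::string var1() const {
        std::lock_guard<std::mutex> lock(mMutex);
        return mVar1;
    }
    void setVar1(const std::string &value) {
        std::lock_guard<std::mutex> lock(mMutex);
        mVar1 = value;
    }
private:
    mutable std::mutex mMutex; // guards all data members
    std::string mVar1;
};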
Option 1: a shared reference
You're not really using wxTimer or wxThread, but classes that inherit from them (at least wxThread::Entry() is pure virtual). In the case of wxTimer you could change the owner to a different wxEvtHandler that will receive the event, but you still need to provide an implementation.
So you can have
class MyTimer: public wxTimer {
public:
void Notify() {
// Your code goes here
// but it can access data through the local reference
}
    void setData(const std::shared_ptr<SharedData> &data) {
        mLocalReference = data;
    }
private:
    std::shared_ptr<SharedData> mLocalReference;
};
That will need to be set:
MyTimer timer;
timer.setData(myData);
timer.StartOnce(10000); // wake me up in 10 secs.
Similarly for the Thread
class MyThread: public wxThread {
public:
    void *Entry() {
        // Your code goes here
        // but it can access data through the local reference
        return NULL;
    }
    void setData(const std::shared_ptr<SharedData> &data) {
        mLocalReference = data;
    }
private:
    std::shared_ptr<SharedData> mLocalReference;
};
That will need to be set:
MyThread *thread=new MyThread();
thread->setData(myData);
thread->Run(); // thread starts running.
Option 2: Using a singleton.
Sometimes you cannot modify MyThread or MyTimer... or it is too difficult to route the reference to myData to the thread or timer instances... or you're just too lazy or too busy to bother (beware of your technical debt!!!)
We can tweak the SharedData into:
struct SharedData {
    std::string var1;
    int var2;
    static SharedData *instance() {
        // NOTE that some mutexes are needed here
        // to prevent the case where first initialization
        // is executed simultaneously from different threads
        // allocating two objects, one of them leaked.
        if(!sInstance) {
            sInstance = new SharedData();
        }
        return sInstance;
    }
private:
    SharedData():var2(0) { } // Note we've made the constructor private
    static SharedData *sInstance; // needs an out-of-class definition:
                                  // SharedData *SharedData::sInstance = 0;
};
This object (because it only allows the creation of a single object) can be accessed from
either MyTimer::Notify() or MyThread::Entry() with
SharedData::instance()->var1;
Interlude: why Singletons are evil
(or why the easy solution might bite you in the future).
What is so bad about singletons?
Why Singletons are Evil
Singletons Are Evil
My main reasons are:
There's one and only one instance... and you might think that you only need one now, but who knows what the future will hold, you've taken an easy solution for a coding problem that has far reaching consequences architecturally and that might be difficult to revert.
It will not allow dependency injection (because the concrete class is used when accessing the object).
Still, I don't think it is something to completely avoid. It has its uses, it can solve your problem and it might save your day.
Option 3: Some middle ground.
You could still organize your data around a central repository with methods to access different instances (or different implementations) of the data.
This central repository can be a singleton (if it really is central, common and unique), but it is not the shared data itself; it is what is used to retrieve the shared data, e.g. identified by some ID (an ID that might be easier to share between the threads using option 1).
Something like:
CentralRepository::instance()->getDataById(sharedId)->var1;
EDIT 2: Comments after OP posted (more) code ;)
It seems that your object EvtFramePrincipal will execute both the timer callback and it will contain the ClientIdle pointer to a Client object (the thread)... I'd do the following (see the sketch after this list):
Make the Client class contain a Partage attribute (a pointer or a smart pointer).
Make the EvtFramePrincipal contain a Partage attribute (a pointer or smart pointer). I guess this will have the lifecycle of the whole application, so the Partage object can share that lifecycle too.
Add mutex locking to all methods setting and getting in the Partage attribute, since it can be accessed from multiple threads.
After the Client object is instantiated, set the reference to the Partage object that the EvtFramePrincipal contains.
Client can access Partage because we've set its reference when it was created. When the Entry method is run in its thread, it will be able to access it.
EvtFramePrincipal can access the Partage (because it is one of its attributes), so the event handler for the timer event will be able to access it.
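A minimal sketch of that wiring (assuming Partage guards its getters/setters internally with a wxMutex; the SetPartage helper is hypothetical):

// In Client.h: give the thread a reference to the shared object
class Client : public wxThread
{
public:
    Client() : wxThread(), m_partage(NULL) {}
    void SetPartage(Partage *p) { m_partage = p; }
    virtual void *Entry()
    {
        // safe to use m_partage here once it has been set
        return NULL;
    }
private:
    Partage *m_partage;
};

// In EvtFramePrincipal's constructor:
//   m_partage = new Partage();
//   ClientIdle = new Client();
//   ClientIdle->SetPartage(m_partage);
//   ClientIdle->Run();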
I've recently started using lambdas an awful lot within threads, and want to make sure I'm not setting myself up for thread-safety issues/crashes later. My usual way of using them is:
class SomeClass {
public:
    int someid;
    void NextCommand();
    std::function<void(int, int)> StoreNumbers;
    SomeClass(int id, std::function<void(int, int)> fn); // constructor sets id and StoreNumbers fn
};
// Called from multiple threads
static void read_callback(int fd, void* ptr)
{
    SomeClass* sc = static_cast<SomeClass*>(ptr);
    ...
    sc->StoreNumbers(someint, someotherint); // voila, thread specific storage.
}
static DWORD WINAPI ThreadFn(LPVOID param)
{
std::list<int> ints1;
std::list<int> ints2;
auto storenumbers = [&] (int i, int i2) {
// thread specific lambda.
ints1.push_back(i);
ints2.push_back(i2);
};
SomeClass s(id, storenumbers);
...
// set up something that eventually calls read_callback with s set as the ptr.
}
ThreadFn is used as the thread function for 30-40 threads.
Is this acceptable? I usually have a few of these thread-specific lambdas that operate on a bunch of thread specific data.
Thank you!
There's no problem here. A data access with a lambda is no different to a data access with a named function, through inline code, a traditional functor, one made with bind, or any other way. As long as that lambda is invoked from only one thread at a time, I don't see any evidence of thread-related problems.