Multithreading with member functions and constructor carrying argument(s) - C++

I have a situation in which I need to instantiate a vector of boost::threads to solve the following:
I have a class called Instrument to hold Symbol information, which looks something like below:
class Instrument
{
public:
    Instrument(StringVector symbols, int i);
    virtual ~Instrument();
    const Instrument& operator= (const Instrument& inst)
    {
        return *this;
    }
    String GetSymbol() { return Symbol_; }
    LongToSymbolInfoPairVector GetTS() { return TS_; }
    bool OrganiseData(TimeToSymbolsInfoPairVector& input, int i);
    static int getRandomNumber(const int low, const int high);
    static double getProbability();
    bool ConstructNewTimeSeries(const int low, const int high);
    bool ReconstructTimeSeries(TimeToSymbolsInfoPairVector& reconstructeddata, int i);
private:
    LongToSymbolInfoPairVector TS_;
    String Symbol_;
    const int checkWindow_;
    String start_, end_;
    long numberofsecsinaday_;
    static std::default_random_engine generator_;
};
There will be as many Instrument objects as there are symbols. These symbols are then accessed in another class, Analysis, for further work; its constructor accepts iterators into the vector of the above Instrument class, as shown below.
class Analysis
{
public:
    Analysis(std::vector<Instrument>::iterator start, std::vector<Instrument>::iterator end);
    virtual ~Analysis();
    bool buildNewTimeSeries(TimeToSymbolsInfoPairVector& reconstructeddata);
    bool printData(TimeToSymbolsInfoPairVector& reconstructeddata);
private:
    std::vector<Instrument> Instruments_;
};
Now I want to multithread this process so that I can separate out, say, 7 symbols per thread and spawn, say, 4 threads.
Following is the updated main.
std::vector<Instrument>::iterator block_start = Instruments.begin();
int first = 0, last = 0;
for (unsigned long i = 0; i < MAX_THREADS; i++)
{
    std::vector<Instrument>::iterator block_end = block_start;
    std::advance(block_end, block_size);
    last = (i + 1) * block_size;
    Analysis* analyzed = new Analysis(block_start, block_end /*first, last*/);
    analyzed->setData(output, first, last);
    threads.push_back(std::thread(std::bind(&Analysis::buildNewTimeSeries, std::ref(*analyzed))));
    block_start = block_end;
    first = last;
}
for (int i = 0; i < MAX_THREADS; i++)
{
    threads[i].join();
}
This is evidently incorrect. I know how to pass an argument to a class constructor or to a member function when instantiating a thread, but I seem to be facing an issue when my purpose is to:
a) pass the constructor of class Analysis a subset of the vector, and
b) call buildNewTimeSeries(TimeToSymbolsInfoPairVector& reconstructeddata)
for each of the 4 threads and then later join them.
Can anyone suggest a neat way of doing this, please?

The best way to go about partitioning a vector of resources (like the std::vector in your case) onto a limited number of threads is to use a multi-threaded design paradigm called thread pools. There is no standard thread pool in C++, so you might have to build one yourself (or use an open-source library). You can have a look at one of the many good open-source implementations here: https://github.com/progschj/ThreadPool
Now, I am not going to use thread pools, but will just give you a couple of suggestions to help you fix your problem without modifying your core functionality/idea.
In main you are dynamically creating objects using new and are passing a reference to each one by dereferencing the pointer (Analysis* analyzed = new Analysis(...)). I understand that your idea here is to use the same object in both main and the thread function. In my opinion this is not a good design. There is a better way to do it.
Instead of using std::thread, use std::async. std::async creates tasks as opposed to threads. There are numerous advantages to using tasks via async. I do not want to make this a long answer by describing threads versus tasks, but one main advantage of tasks, which directly helps in your case, is that they let you return a value (via a future) from the task back to the main function.
Now, to rewrite your main function with async, tweak your code as follows:
1. Do not dynamically create a vector using new; instead create a local vector and move it into the task using std::move while calling async.
2. Modify Analysis::buildNewTimeSeries to accept an rvalue reference.
3. Write a constructor for Analysis that takes an rvalue vector.
4. The task will then modify this vector locally and return it to the main function.
5. While calling async, store the return value of each async call in a vector<future<objectType>>.
6. After launching all the tasks using async, call .get() on each element of this future vector. This .get() method will return the vector modified in and returned from the task.
7. Merge these returned vectors into the final result vector.
By moving the vector from main to the task and then returning it back, you allow only one owner to have exclusive access to the vector at any time. So you cannot access the vector from main after it has been moved to the task. This is in contrast to your implementation, where both the main function and the thread function can access the newly created object that gets passed by reference to the thread.
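A minimal sketch of that approach, assuming (as hypothetical changes) an Analysis constructor taking std::vector<Instrument>&& and a buildNewTimeSeries() overload that returns its result instead of filling an out-parameter, and assuming TimeToSymbolsInfoPairVector behaves like a std::vector:
#include <future>
#include <iterator>
#include <utility>
#include <vector>

std::vector<std::future<TimeToSymbolsInfoPairVector>> futures;
std::vector<Instrument>::iterator block_start = Instruments.begin();
for (unsigned long i = 0; i < MAX_THREADS; i++)
{
    std::vector<Instrument>::iterator block_end = block_start;
    std::advance(block_end, block_size);
    // Move this block into its own local vector; main gives up ownership of it.
    std::vector<Instrument> block(std::make_move_iterator(block_start),
                                  std::make_move_iterator(block_end));
    futures.push_back(std::async(std::launch::async,
        [](std::vector<Instrument> blk) {
            Analysis analyzed(std::move(blk));     // hypothetical rvalue constructor
            return analyzed.buildNewTimeSeries();  // hypothetical overload returning its result
        },
        std::move(block)));
    block_start = block_end;
}
// Collect the per-task results and merge them.
TimeToSymbolsInfoPairVector result;
for (auto& f : futures)
{
    TimeToSymbolsInfoPairVector partial = f.get();
    result.insert(result.end(), partial.begin(), partial.end());
}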

how to move unique_ptr object between two STL containers [duplicate]

This question already has answers here: Move out element of std priority_queue in C++11 (6 answers).
#include <utility>
#include <memory>
#include <unordered_map>
#include <queue>
#include <vector>
using namespace std;

struct Task {
    char t;
    int cnt;
    Task(char ch, int cnt) : t(ch), cnt(cnt) {}
};

struct CompTask {
    bool operator() (const unique_ptr<Task>& a, const unique_ptr<Task>& b) {
        return a->cnt < b->cnt;
    }
};

class Schedule {
public:
    int schedule() {
        unordered_map<unique_ptr<Task>, int> sleep_q;
        priority_queue<unique_ptr<Task>, vector<unique_ptr<Task>>, CompTask> ready_q; // max heap
        ready_q.push(std::make_unique<Task>('A', 1));
        auto& ptr = ready_q.top();
        //sleep_q.insert({ptr, 1}); // compile error
        sleep_q.insert({std::move(ptr), 1}); // compile error
        // some other code...
        return 1;
    }
};

int main() {
    return 0;
}
// error:
cpp:38:17: error: no matching member function for call to 'insert'
sleep_q.insert({std::move(ptr), 1}); // compile error
~~~~~~~~^~~~~~
Programming context:
I have a Task class, and the program attempts to simulate task scheduling (which involves moving a task back and forth between a ready queue and a sleep queue).
I have two std containers, one for the ready queue and one for the sleep queue: the priority_queue has value type unique_ptr<Task>, and the other is an unordered_map (the sleep queue) whose key is also unique_ptr<Task>. I had trouble moving the unique_ptr object from the priority_queue to the unordered_map (shown in the code).
My questions are:
(1) How do I insert an item into the unordered_map? I got compilation errors doing that.
(2) In this problem context, which type of "pointer" would be preferred: unique_ptr<Task>, shared_ptr<Task>, or just Task*?
There seems to be a lack of move functionality out of std::priority_queue<>. You can work around it using const_cast, though in rare cases this might cause undefined behavior (when the stored element type is const).
int schedule() {
    unordered_map<unique_ptr<Task>, int> sleep_q;
    priority_queue<unique_ptr<Task>, vector<unique_ptr<Task>>, CompTask> ready_q; // max heap
    ready_q.push(std::make_unique<Task>('A', 1));
    unique_ptr<Task> ptr =
        std::move(const_cast<unique_ptr<Task>&>(ready_q.top()));
    // ^ Here. priority_queue::top() returns `const&`.
    ready_q.pop(); // remove moved element from priority_queue
    sleep_q.insert(std::make_pair(std::move(ptr), 1));
    // some other code...
    return 1;
}
Moving a unique_ptr between std containers requires destroying the unique_ptr stored in the first container and handing its backing data to a new unique_ptr; this is what std::move accomplishes. However, it's usually easier to use a shared_ptr, which avoids the problem by allowing shared_ptrs pointing to the same task to sit in two std containers simultaneously, even if just for a moment.
However, since you could have multiple shared_ptr objects pointing to the same task, the shared_ptr wouldn't work to uniquely identify the task. To do that, I would recommend creating a task id for each task. A simple integer would work for this, as long as you guarantee it's unique to that task. An integer task id would also work well with a map, since the id could be used as the key.
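A minimal sketch of that idea, assuming a hypothetical id field added to Task and an integer-keyed sleep queue:
#include <memory>
#include <unordered_map>

struct Task {
    int  id;   // hypothetical unique identifier for this task
    char t;
    int  cnt;
    Task(int id, char ch, int cnt) : id(id), t(ch), cnt(cnt) {}
};

int main() {
    std::unordered_map<int, std::unique_ptr<Task>> sleep_q;
    auto task = std::make_unique<Task>(42, 'A', 1);
    int key = task->id;                    // the integer id is the map key
    sleep_q.emplace(key, std::move(task)); // the pointer itself is the mapped value
    return 0;
}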

C++ safe idiom to call a member function of a class through a shared_ptr class member

Problem description
In designing an observer pattern for my code, I encountered the following task: I have a class Observer which contains a variable std::shared_ptr<Receiver> and I want to use a weak_ptr<Receiver> to this shared-pointer to safely call a function update() in Observer (for a more detailed motivation including some profiling measurements, see the EDIT below).
Here is an example code:
struct Receiver
{
    void call_update_in_observer() { /* how to implement this function? */ }
};

struct Observer
{
    virtual void update() = 0;
    std::shared_ptr<Receiver> receiver;
};
As mentioned there is a weak_ptr<Receiver> from which I want to call Observer::update() -- at most once -- via Receiver::call_update_in_observer():
Observer observer;
std::weak_ptr<Receiver> w (observer.receiver);
auto s = w.lock();
if(s)
{
    s->call_update_in_observer(); //this shall call at most once Observer::update()
                                  //regardless how many copies of observer there are
}
(Fyi: the call of update() should happen at most once because it updates a shared_ptr in some derived class which is the actual observer. However, whether it is called once or more often does not affect the question about "safeness" imo.)
Question:
What is an appropriate implementation of Observer and Receiver to carry out that process in a safe manner?
Solution attempt
Here is an attempt for a minimal implementation -- the idea is that Receiver manages a set of currently valid Observer objects, of which one member is called:
struct Receiver
{
    std::set<Observer *> obs;
    void call_update_in_observer() const
    {
        for(auto& o : obs)
        {
            o->update();
            break; //one call is sufficient
        }
    }
};
The class Observer has to take care that the std::shared_ptr<Receiver> object is up-to-date:
struct Observer
{
    Observer()
    {
        receiver->obs.insert(this);
    }
    Observer(Observer const& other) : receiver(other.receiver)
    {
        receiver->obs.insert(this);
    }
    Observer& operator=(Observer rhs)
    {
        std::swap(*this, rhs);
        return *this;
    }
    ~Observer()
    {
        receiver->obs.erase(this);
    }
    virtual void update() = 0;
    std::shared_ptr<Receiver> receiver = std::make_shared<Receiver>();
};
Questions:
Is this already safe? -- "safe" meaning that no expired Foo object is called. Or are there some pitfalls which have to be considered?
If this code is safe, how would one implement the move constructor and assignment?
(I know this has the feeling of being appropriate for CodeReview, but it's more about a reasonable pattern for this task than about my code, so I posted it here ... and further, the move constructors are still missing.)
EDIT: Motivation
As the above requirements have been called "confusing" in the comments (which I can't deny), here is the motivation: Consider a custom Vector class which in order to save memory performs shallow copies:
struct Vector
{
    auto operator[](int i) const { return (*v)[i]; }
    std::shared_ptr<std::vector<double> > v;
};
Next one has expression template classes e.g. for the sum of two vectors:
template<typename _VectorType1, typename _VectorType2>
struct VectorSum
{
    using VectorType1 = std::decay_t<_VectorType1>;
    using VectorType2 = std::decay_t<_VectorType2>;

    //alternative 1: store by value
    VectorType1 v1;
    VectorType2 v2;

    //alternative 2: store by shared_ptr
    std::shared_ptr<VectorType1> v1;
    std::shared_ptr<VectorType2> v2;

    auto operator[](int i) const
    {
        return v1[i] + v2[i];
    }
};
//next overload operator+ etc.
According to my measurements, alternative 1 where one stores the vector expressions by value (instead of by shared-pointer) is faster by a factor of two in Visual Studio 2015. In a simple test on Coliru, the speed improvement is even a factor of six:
type Average access time ratio
--------------------------------------------------------------
Foo : 2.81e-05 100%
std::shared_ptr<Foo> : 0.000166 591%
std::unique_ptr<Foo> : 0.000167 595%
std::shared_ptr<FooBase>: 0.000171 611%
std::unique_ptr<FooBase>: 0.000171 611%
The speedup appears particularly when operator[](int i) does not perform expensive calculations which would make the call overhead negligible.
Consider now the case where an arithmetic operation on a vector expression is too expensive to calculate each time anew (e.g. an exponential moving average). Then one needs to memoize the result, for which as before a std::shared_ptr<std::vector<double> > is used.
template<typename _VectorType>
struct Average
{
    using VectorType = std::decay_t<_VectorType>;
    VectorType v;
    std::shared_ptr<std::vector<double> > store;
    auto operator[](int i) const
    {
        //if store[i] is filled, return it
        //otherwise calculate average and store it.
    }
};
In this setup, when the vector expression v is modified somewhere in the program, one needs to propagate that change to the dependent Average class (of which many copies can exist), so that store is recalculated -- otherwise it will contain wrong values. In this update process, however, store needs to be recalculated only once, regardless of how many copies of the Average object exist.
This mix of shared-pointer and value semantics is the reason why I'm running into the somewhat confusing situation above. My solution attempt is to enforce the same cardinality in the observer as in the updated objects -- this is the reason for the shared_ptr<Receiver>.

State data in functor members vs global function

When comparing functions and functors, it is often mentioned that one advantage of a functor over a function is that a functor is stateful.
However, in the code below, it seems to me that a function may be stateful too. So what am I doing/understanding wrong?
#include <algorithm>
#include <vector>

struct Accumulator
{
    int counter = 0;
    int operator()(int i)
    {
        counter += i;
        return counter;
    }
};

int Accumulate(int i)
{
    static int counter = 0;
    counter += i;
    return counter;
}

int main()
{
    Accumulator acc;
    std::vector<int> vec{1,2,3,4};
    Accumulator acc2 = std::for_each(vec.begin(), vec.end(), acc);
    int d1 = acc(0); // 0, acc is passed by value
    int d2 = acc2(0); // 10
    std::for_each(vec.begin(), vec.end(), Accumulate);
    int d4 = Accumulate(0); // 10
    return 0;
}
Each instance of a functor has its own state, while the static local variable inside the free function is shared by every caller.
If you called for_each multiple times with the Accumulate() function, the counter would never reset, and each subsequent call would begin where the previous call ended. The functor would only show this behavior if the same instance were reused; creating a new functor would solve the problem.
You've used a static local variable to store state, but there's only one copy of the state no matter how many times you use Accumulate. And as chris points out, the initialization is only ever performed once.
With the functor, each new functor instance you create has its own independent state, initialized during instance creation.
Even if you provided a reset mechanism for the function version's state (for example, by moving the variable to a helper namespace where a second function can modify it), you still have only one accumulator at a time.
With functors, you have no problem developing a rule such as "prime numbers get accumulated here, even composites there, and odd composites into a third one" that uses three accumulators at once.
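For example, here is a minimal sketch of two independent accumulators in use at once, reusing the Accumulator functor from the question:
#include <iostream>

struct Accumulator
{
    int counter = 0;
    int operator()(int i) { counter += i; return counter; }
};

int main()
{
    Accumulator evens, odds;            // two independent states
    for (int i = 1; i <= 10; ++i)
        (i % 2 == 0 ? evens : odds)(i); // route each number to its own accumulator
    std::cout << evens.counter << ' ' << odds.counter << '\n'; // prints "30 25"
    // A single static counter inside a free function could not keep these sums separate.
    return 0;
}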

Is qsort thread safe?

I have some old code that uses qsort to sort an MFC CArray of structures but am seeing the occasional crash that may be down to multiple threads calling qsort at the same time. The code I am using looks something like this:
struct Foo
{
    CString str;
    time_t t;
    Foo(LPCTSTR lpsz, time_t ti) : str(lpsz), t(ti)
    {
    }
};

class Sorter
{
public:
    static void DoSort();
    static int __cdecl SortProc(const void* elem1, const void* elem2);
};
...
void Sorter::DoSort()
{
    CArray<Foo*, Foo*> data;
    for (int i = 0; i < 100; i++)
    {
        Foo* foo = new Foo("some string", 12345678);
        data.Add(foo);
    }
    qsort(data.GetData(), data.GetCount(), sizeof(Foo*), SortProc);
    ...
}

int __cdecl Sorter::SortProc(const void* elem1, const void* elem2)
{
    Foo* foo1 = (Foo*)elem1;
    Foo* foo2 = (Foo*)elem2;
    // 0xC0000005: Access violation reading location blah here
    return (int)(foo1->t - foo2->t);
}
...
Sorter::DoSort();
...
Sorter::DoSort();
I am about to refactor this horrible code to use std::sort instead but wondered if the above is actually unsafe?
EDIT: Sorter::DoSort is actually a static function but uses no static variables itself.
EDIT2: The SortProc function has been changed to match the real code.
Your problem doesn't necessarily have anything to do with thread safety.
The sort callback function takes pointers to each item, not the items themselves. Since you are sorting Foo*, what you actually want to do is access the parameters as Foo**, like this:
int __cdecl SortProc(const void* elem1, const void* elem2)
{
    Foo* foo1 = *(Foo**)elem1;
    Foo* foo2 = *(Foo**)elem2;
    if(foo1->t < foo2->t) return -1;
    else if (foo1->t > foo2->t) return 1;
    else return 0;
}
Your SortProc isn't returning correct results, and this likely leads to memory corruption by something assuming that the data is, well, sorted after you get done sorting it. You could even be leading qsort into corruption as it tries to sort, but that of course varies by implementation.
The comparison function for qsort must return negative if the first object is less than the second, zero if they are equal, and positive otherwise. Your current code only ever returns 0 or 1, and returns 1 when you should be returning negative.
int __cdecl Sorter::SortProc(const void* ap, const void* bp) {
    // The array holds Foo*, so qsort hands us pointers to Foo* elements.
    Foo const& a = **(Foo* const*)ap;
    Foo const& b = **(Foo* const*)bp;
    if (a.t == b.t) return 0;
    return (a.t < b.t) ? -1 : 1;
}
C++ doesn't really make any guarantees about thread safety. About the most you can say is that either multiple readers OR a single writer to a data structure will be OK. Any combination of readers and writers, and you need to serialise the access somehow.
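For example, a minimal sketch of serialising the calls with a mutex (assuming C++11 is available; the same idea works with a CRITICAL_SECTION in older MFC code):
#include <mutex>

static std::mutex g_sortMutex;

void Sorter::DoSort()
{
    std::lock_guard<std::mutex> lock(g_sortMutex); // only one thread sorts at a time
    // ... build the CArray and call qsort exactly as before ...
}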
Since you tagged your question with the MFC tag, I suppose you should select the Multi-threaded Runtime Library in Project Settings.
Right now, your code is thread-safe but useless, as the DoSort method only uses local variables and doesn't even return anything. If the data you are sorting is a member of Sorter, then it is not safe to call the function from multiple threads. In general, read up on reentrancy; this may give you an idea of what you need to look out for.
What makes it thread-safe is whether your objects are thread-safe: to make a qsort call thread-safe, you must ensure that anything that reads from or writes to the data being sorted does so in a thread-safe way.
The pthreads man page lists the standard functions which are not required to be thread-safe. qsort is not among them, so it is required to be thread-safe in POSIX.
http://www.kernel.org/doc/man-pages/online/pages/man7/pthreads.7.html
I can't find the equivalent list for Windows, though, so this isn't really an answer to your question. I'd be a bit surprised if it was different.
Be aware what "thread-safe" means in this context, though. It means you can call the same function concurrently on different arrays -- it doesn't mean that concurrent access to the same data via qsort is safe (it isn't).
As a word of warning, you may find std::sort is not as fast as qsort. If you do find that try std::stable_sort.
I once wrote a BWT compressor based on the code presented by Mark Nelson in Dr. Dobb's, and when I turned it into classes I found that regular sort was a lot slower. stable_sort fixed the speed problems.
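Since the question mentions refactoring to std::sort, here is a minimal sketch of what the replacement might look like for the CArray<Foo*, Foo*> from the question (the comparator name is hypothetical):
#include <algorithm>

// Strict weak ordering on Foo* by timestamp.
bool LessByTime(const Foo* a, const Foo* b)
{
    return a->t < b->t;
}

// Inside Sorter::DoSort(), replacing the qsort call:
std::sort(data.GetData(), data.GetData() + data.GetCount(), LessByTime);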

Creating function pointers to functions created at runtime

I would like to do something like:
for(int i=0;i<10;i++)
addresses[i] = & function(){ callSomeFunction(i) };
Basically, having an array of addresses of functions with behaviours related to a list of numbers.
If it's possible with external classes like Boost.Lambda is ok.
Edit: after some discussion I've come to the conclusion that I wasn't explicit enough. Please read Creating function pointers to functions created at runtime.
What I really really want to do in the end is:
class X
{
    void action();
};

X* objects;
for(int i=0;i<0xFFFF;i++)
    addresses[i] = & function(){ objects[i]->action() };

void someFunctionUnknownAtCompileTime()
{
}

void anotherFunctionUnknowAtCompileTime()
{
}
patch someFunctionUnknownAtCompileTime() with assembly to jump to function at addresses[0]
patch anotherFunctionUnknownAtCompileTime() with assembly to jump to function at addresses[1]
sth, I don't think your method will work because they are not real functions, but my bad for not explaining exactly what I want to do.
If I understand you correctly, you're trying to fill a buffer with machine code generated at runtime and get a function pointer to that code so that you can call it.
It is possible, but challenging. You can use reinterpret_cast<> to turn a data pointer into a function pointer, but you'll need to make sure that the memory you allocated for your buffer is flagged as executable by the operating system. That will involve a system call (VirtualAlloc() with PAGE_EXECUTE_READWRITE on Windows, mmap()/mprotect() with PROT_EXEC on Unix) rather than a "plain vanilla" malloc/new call.
Assuming you've got an executable block of memory, you'll have to make sure that your machine code respects the calling convention indicated by the function pointer you create. That means pushing/popping the appropriate registers at the beginning of the function, etc.
But, once you've done that, you should be able to use your function pointer just like any other function.
It might be worth looking at an open source JVM (or Mono) to see how they do it. This is the essence of JIT compilation.
Here is an example I just hacked together:
#include <cstdio>

int func1( int op )
{
    printf( "func1 %d\n", op );
    return 0;
}

int func2( int op )
{
    printf( "func2 %d\n", op );
    return 0;
}

typedef int (*fp)(int);

int main( int argc, char* argv[] )
{
    fp funcs[2] = { func1, func2 };
    int i;
    for ( i = 0; i < 2; i++ )
    {
        (*funcs[i])(i);
    }
}
The easiest way should be to create a bunch of boost::function objects:
#include <boost/bind.hpp>
#include <boost/function.hpp>
// ...
std::vector< boost::function<void ()> > functors;
for (int i = 0; i < 10; i++)
    functors.push_back(boost::bind(callSomeFunction, i));
// call one of them:
functors[3]();
Note that the elements of the vector are not "real functions" but objects with an overloaded operator(). Usually this shouldn't be a disadvantage and actually be easier to handle than real function pointers.
You can do that simply by defining those functions under some arbitrary names in the global scope beforehand.
This is basically what was said above, but modifying your code it would look something like this:
std::vector<int (*) (int)> addresses;
for(int i=0;i<10;i++) {
    addresses.push_back(&myFunction); // myFunction: any pre-defined function with signature int(int)
}
I'm not entirely clear on what you mean when you say functions created at run time... I don't think you can create a function at run time, but you can decide at run time which function pointers are put into your array/vector. Keep in mind that with this method all of your functions need to have the same signature (same return type and parameters).
You can't invoke a member function by itself without the this pointer. All instances of a class share a single copy of the function stored in one location in memory. When you call p->Function(), the value of p is stored somewhere (I can't remember if it's a register or the stack) and that value is used as the base offset to calculate the locations of the member variables.
So this means you have to store the function pointer and the pointer to the object if you want to invoke a function on it. The general form for this would be something like this:
class MyClass {
public:
    void DoStuff();
};

// On the left-hand side is a declaration of a pointer to a member function of MyClass taking no parameters and returning void.
// On the right-hand side we initialize that pointer to point at DoStuff.
void (MyClass::*pVoid)() = &MyClass::DoStuff;
MyClass* pMyClass = new MyClass();
// Here we have a pointer to MyClass and we call the function pointed to by pVoid on it (note the ->* syntax).
(pMyClass->*pVoid)();
As I understand the question, you are trying to create functions at runtime (just as we can do in Ruby). If that is the intention, I'm afraid it is not possible in compiled languages like C++.
Note: If my understanding of the question is not correct, please do not downvote :)