I'm trying to understand C++ multithreading and synchronization between threads.
So I created 2 threads: the first one increments a value and the second one decrements it. What I can't understand is why the resulting value after execution is different from the initial one, since I add and subtract the same amount from the same variable.
#include <iostream>
#include <thread>
#include <mutex>
#include <chrono>

static unsigned int counter = 100;
static bool alive = true;
std::mutex mutex;
void add() {
while (alive)
{
mutex.lock();
counter += 10;
std::cout << "Counter Add = " << counter << std::endl;
mutex.unlock();
}
}
void sub() {
while (alive)
{
mutex.lock();
counter -= 10;
std::cout << "Counter Sub = " << counter<< std::endl;
mutex.unlock();
}
}
int main()
{
std::cout << "critical section value at the start " << counter << std::endl;
std::thread tAdd(add);
std::thread tSub(sub);
std::this_thread::sleep_for(std::chrono::seconds(1)); // portable replacement for the Windows-only Sleep(1000)
alive = false;
tAdd.join();
tSub.join();
std::cout << "critical section value at the end " << counter << std::endl;
return 0;
}
Output
critical section value at the start 100
critical section value at the end 220
So what I need is a way to keep the value as it is, i.e. counter equal to 100 at the end, using those two threads.
The problem is that both threads spin in their loops for that one second and get greedy with the mutex; nothing forces them to alternate. Put a print in both functions and see which thread gets the lock more often.
Mutexes are used to synchronize access to resources so that threads do not read or write incomplete or corrupted data; they do not create a neat alternating sequence.
If you want to keep that value at 100 at the end of execution you need to use a semaphore so that there will be an ordered sequence of access to the variable.
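For reference, C++20 ships std::binary_semaphore in <semaphore>; a minimal sketch of that ordered access could look like the following (addTurn / subTurn are names invented for this sketch, and alive is assumed to be made a std::atomic<bool>):

std::binary_semaphore addTurn{1};   // add() gets the first turn
std::binary_semaphore subTurn{0};   // sub() waits until add() has run

void add() {
    while (alive) {
        addTurn.acquire();          // wait for our turn
        counter += 10;
        subTurn.release();          // hand the turn to sub()
    }
}

void sub() {
    while (alive) {
        subTurn.acquire();
        counter -= 10;
        addTurn.release();
    }
}

As with the mutex-only version, the counter can still end one step ahead (at 110) if alive is flipped between an add and the matching sub.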
I think what you want is to signal to the subtracting thread that you have just successfully added in the add thread, and vice versa. You'll additionally have to communicate which thread goes next. A naive solution:
std::atomic<bool> shouldAdd{true}; // atomic (needs <atomic>), because the flag is also checked outside the mutex in the busy-wait below
void add() {
while( alive ) {
if( shouldAdd ) {
// prefer lock guards over lock() and unlock() for exception safety
std::lock_guard<std::mutex> lock{mutex};
counter += 10;
std::cout << "Counter Add = " << counter << std::endl;
shouldAdd = false;
}
}
}
void sub() {
while( alive ) {
if( !shouldAdd ) {
std::lock_guard<std::mutex> lock{mutex};
counter -= 10;
std::cout << "Counter Sub = " << counter << std::endl;
shouldAdd = true;
}
}
}
Now add() will busy wait for sub() to do its job before it will try and acquire the lock again.
To prevent busy waiting, you might choose a condition variable instead of trying to use only a single mutex. You can wait() on the condition variable before you add or subtract, and notify() the waiting thread afterwards.
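A rough sketch of that variant, building on the globals from the question and reusing the shouldAdd flag; for a clean shutdown, main would have to flip alive while holding the mutex and then call cv.notify_all() so a blocked thread can wake up and exit:

std::condition_variable cv;   // used together with the existing mutex and shouldAdd

void add() {
    while (true) {
        std::unique_lock<std::mutex> lock{mutex};
        cv.wait(lock, [] { return shouldAdd || !alive; }); // sleep until it is our turn (or shutdown)
        if (!alive) break;
        counter += 10;
        std::cout << "Counter Add = " << counter << std::endl;
        shouldAdd = false;
        lock.unlock();
        cv.notify_one();   // wake the subtracting thread
    }
}
// sub() is the mirror image: wait for !shouldAdd, subtract, set shouldAdd = true, notify_one().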
Related
I have a thread that is doing "work"; it is supposed to report progress when a condition variable notifies it. This thread waits on the condition variables.
The other thread waits for x milliseconds and then notifies the condition variable to proceed.
I have 5 condition variables (this is an exercise for school), and each time one gets notified, work progress is supposed to be reported.
The problem I'm having is that thread 2, the one that is supposed to notify thread 1, goes through all 5 checkPoints, but only one notification seems to take effect in the end. So I end up in a situation where progress is at 20% at the end, thread 1 is waiting for another notify, but thread 2 has already finished all of its notifies.
Where is the flaw in my implementation of this logic?
Code below:
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <thread>
using namespace std;
class Program {
public:
Program() {
m_progress = 0;
m_check = false;
}
bool isWorkReady() { return m_check; }
void loopWork() {
cout << "Working ... : " << endl;
work(m_cv1);
work(m_cv2);
work(m_cv3);
work(m_cv4);
work(m_cv5);
cout << "\nFinished!" << endl;
}
void work(condition_variable &cv) {
unique_lock<mutex> mlock(m_mutex);
cv.wait(mlock, bind(&Program::isWorkReady, this));
m_progress++;
cout << " ... " << m_progress * 20 << "%" << endl;
m_check = false;
}
void checkPoint(condition_variable &cv) {
lock_guard<mutex> guard(m_mutex);
cout << " < Checking >" << m_progress << endl;
this_thread::sleep_for(chrono::milliseconds(300));
m_check = true;
cv.notify_one();
}
void loopCheckPoints() {
checkPoint(m_cv1);
checkPoint(m_cv2);
checkPoint(m_cv3);
checkPoint(m_cv4);
checkPoint(m_cv5);
}
private:
mutex m_mutex;
condition_variable m_cv1, m_cv2, m_cv3, m_cv4, m_cv5;
int m_progress;
bool m_check;
};
int main() {
Program program;
thread t1(&Program::loopWork, &program);
thread t2(&Program::loopCheckPoints, &program);
t1.join();
t2.join();
return 0;
}
The loopCheckPoints() thread holds a lock for some time, sets m_check then releases the lock and immediately goes on to grab the lock again. The loopWork() thread may not have woken up in between to react to the m_check change.
Never hold locks for long times. Be as quick as possible. If you can't get the program to work without adding sleeps, you have a problem.
One way to fix this would be to check that the worker has actually set m_check back to false:
void work(condition_variable& cv) {
{ // lock scope
unique_lock<mutex> mlock(m_mutex);
cv.wait(mlock, [this] { return m_check; });
m_progress++;
cout << " ... " << m_progress * 20 << "%" << endl;
m_check = false;
}
// there's no need to hold the lock when notifying
cv.notify_one(); // notify that we set it back to false
}
void checkPoint(condition_variable& cv) {
// if you are going to sleep, do it without holding the lock
// this_thread::sleep_for(chrono::milliseconds(300));
{ // lock scope
lock_guard<mutex> guard(m_mutex);
cout << "<Checking> " << m_progress << endl;
m_check = true;
}
cv.notify_one(); // no need to hold the lock here
{
// Check that m_check is set back to false
unique_lock<mutex> mlock(m_mutex);
cv.wait(mlock, [this] { return not m_check; });
}
}
Where is the flaw in my implementation of this logic?
cv.notify_one does not require that the code after cv.wait(mlock, bind(&Program::isWorkReady, this)); continues immediately, so it is perfectly valid for multiple checkPoint calls to execute before the code after cv.wait continues.
But after the cv.wait you set m_check back to false, so if there is no further checkPoint execution remaining that would set m_check = true; again, your work function gets stuck.
Instead of m_check being a bool you could think about making it a counter, that is incremented in checkPoint and decremented in work.
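A hedged sketch of that counter idea, with m_check replaced by an int member m_pending (a name invented here):

// in the class: int m_pending = 0;  instead of bool m_check
void work(condition_variable &cv) {
    unique_lock<mutex> mlock(m_mutex);
    cv.wait(mlock, [this] { return m_pending > 0; }); // a queued checkpoint survives...
    --m_pending;                                      // ...even if we were still busy when it arrived
    m_progress++;
    cout << " ... " << m_progress * 20 << "%" << endl;
}

void checkPoint(condition_variable &cv) {
    {
        lock_guard<mutex> guard(m_mutex);
        cout << " < Checking > " << m_progress << endl;
        ++m_pending;   // record the checkpoint even if the worker is not waiting yet
    }
    cv.notify_one();
}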
I'm trying to figure out how to use std::condition_variable in C++ by implementing a "strange" producer and consumer program in which I have set a limit on the count variable.
The main thread (the "producer") increments the count and must wait for it to return to zero before issuing a new increment.
The other threads enter a loop where they decrement the counter and issue a notification.
I am stuck because it is not clear to me how to end the program by having all threads exit their while loops in an orderly way.
Could someone give me some guidance on how to implement it, please?
Code
#include <iostream>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <vector>
int main() {
int n_core = std::thread::hardware_concurrency();
std::vector<std::thread> workers;
int max = 100;
int count = 0;
std::condition_variable cv;
std::mutex mutex;
int timecalled = 0;
for (int i = 0; i < n_core; i++) {
workers.emplace_back(std::thread{[&max, &count, &mutex, &cv]() {
while (true) {
std::unique_lock<std::mutex> lk{mutex};
std::cout << std::this_thread::get_id() << " cv" << std::endl;
cv.wait(lk, [&count]() { return count == 1; });
std::cout << std::this_thread::get_id() << " - " << count << std::endl;
count--;
std::cout << std::this_thread::get_id() << " notify dec" << std::endl;
cv.notify_all();
}
}});
}
while (max > 0) {
std::unique_lock<std::mutex> lk{mutex};
std::cout << std::this_thread::get_id() << " cv" << std::endl;
cv.wait(lk, [&count]() { return count == 0; });
std::cout << std::this_thread::get_id() << " created token" << std::endl;
count++;
max--;
timecalled++;
std::cout << std::this_thread::get_id() << " notify inc" << std::endl;
cv.notify_all();
}
for (auto &w : workers) {
w.join();
}
std::cout << timecalled << std::endl; // must be equal to max
std::cout << count << std::endl; // must be zero
}
Problem
The program doesn't end because it is stuck on some final join.
Expected Result
The expected result must be:
100
0
Edits Made
EDIT 1: I replaced the max > 0 in the while with true. Now the loops are unbounded, but with the solution from #prog-fh it seems to work.
EDIT 2: I added a variable to check the result at the end.
EDIT 3: I changed while(true) to while(max > 0). Could this be a concurrency problem, since we are reading max without holding the lock?
The threads are waiting for something new in the call cv.wait().
But the only change that can be observed with the provided lambda-closure is the value of count.
The value of max must be checked too in order to have a chance to leave this cv.wait() call.
A minimal change in your code could be
cv.wait(lk, [&max, &count]() { return count == 1 || max<=0; });
if(max<=0) break;
assuming that changes to max always occur under the control of the mutex.
An edit to clarify the accesses to max.
If the loop run by the threads is now while(true), then max is only read inside the loop body, which is synchronised by mutex (thanks to lk).
The loop run by the main program is while (max > 0): here max is read without synchronisation, but the only thread that can change this variable is the main thread itself, so from this perspective it is purely serial code.
The whole body of this loop is synchronised by mutex (thanks to lk), so it is safe to change max here, since the read operations in the worker threads are synchronised in the same way.
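Putting that minimal change into the worker, a sketch of the loop could look like this (with a slightly stronger exit test than above, so the last token is still consumed before the workers leave):

workers.emplace_back(std::thread{[&max, &count, &mutex, &cv]() {
    while (true) {
        std::unique_lock<std::mutex> lk{mutex};
        cv.wait(lk, [&count, &max]() { return count == 1 || max <= 0; });
        if (max <= 0 && count == 0) break;  // production finished and nothing left to take
        count--;
        cv.notify_all();                    // let the producer (or the exit check) run again
    }
}});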
You have a race condition: in your code max may be read by multiple threads while it is being modified in main, which is a data race according to the C++ standard.
The predicates you are using in wait also seem incorrect (you're comparing with ==).
I am implementing a producer consumer project in C++, and when I run the program the same consumer grabs almost all of the work, without letting any of the other consumer threads grab any. Sometimes another thread does get some work, but then that thread takes control for a while. For example, TID 10 could grab almost all of the work, but then all of a sudden TID 12 would grab it, with no other consumer threads getting work in between.
Any idea why other threads wouldn't have a chance to grab work?
#include <thread>
#include <iostream>
#include <mutex>
#include <condition_variable>
#include <deque>
#include <vector>
#include <csignal>
#include <unistd.h>
using namespace std;
int max_queue_size = 100;
int num_producers = 5;
int num_consumers = 7;
int num_operations = 40;
int operations_created = 0;
thread_local int operations_created_by_this_thread = 0;
int operations_consumed = 0;
thread_local int operations_consumed_by_this_thread = 0;
struct thread_stuff {
int a;
int b;
int operand_num;
char operand;
};
char operands[] = {'+', '-', '/', '*'};
deque<thread_stuff> q;
bool finished = false;
condition_variable cv;
mutex queue_mutex;
void producer(int n) {
while (operations_created_by_this_thread < num_operations) {
int oper_num = rand() % 4;
thread_stuff equation;
equation.a = rand();
equation.b = rand();
equation.operand_num = oper_num;
equation.operand = operands[oper_num];
while ((operations_created - operations_consumed) >= max_queue_size) {
// don't do anything until it has space available
}
{
lock_guard<mutex> lk(queue_mutex);
q.push_back(equation);
operations_created++;
}
cv.notify_all();
operations_created_by_this_thread++;
this_thread::sleep_for(chrono::seconds(rand() % 2)); // standard sleep_for; __sleep_for is a libstdc++ internal
}
{
lock_guard<mutex> lk(queue_mutex);
if(operations_created == num_operations * num_producers){
finished = true;
}
}
cv.notify_all();
}
void consumer() {
while (true) {
unique_lock<mutex> lk(queue_mutex);
cv.wait(lk, [] { return finished || !q.empty(); });
if(!q.empty()) {
thread_stuff data = q.front();
q.pop_front();
operations_consumed++;
operations_consumed_by_this_thread++;
int ans = 0;
switch (data.operand_num) {
case 0:
ans = data.a + data.b;
break;
case 1:
ans = data.a - data.b;
break;
case 2:
ans = data.a / data.b;
break;
case 3:
ans = data.a * data.b;
break;
}
cout << "Operation " << operations_consumed << " processed by PID " << getpid()
<< " TID " << this_thread::get_id() << ": "
<< data.a << " " << data.operand << " " << data.b << " = " << ans << " queue size: "
<< (operations_created - operations_consumed) << endl;
}
this_thread::yield();
if (finished) break;
}
}
void usr1_handler(int signal) {
cout << "Status: Produced " << operations_created << " operations and "
<< (operations_created - operations_consumed) << " operations are in the queue" << endl;
}
void usr2_handler(int signal) {
cout << "Status: Consumed " << operations_consumed << " operations and "
<< (operations_created - operations_consumed) << " operations are in the queue" << endl;
}
int main(int argc, char *argv[]) {
if (argc < 5) {
cout << "Invalid number of parameters passed in" << endl;
exit(1);
}
max_queue_size = atoi(argv[1]);
num_operations = atoi(argv[2]);
num_producers = atoi(argv[3]);
num_consumers = atoi(argv[4]);
// signal(SIGUSR1, usr1_handler);
// signal(SIGUSR2, usr2_handler);
// vectors instead of variable-length arrays, which are a non-standard extension
vector<thread> producers(num_producers);
vector<thread> consumers(num_consumers);
for (int i = 0; i < num_producers; i++) {
producers[i] = thread(producer, num_operations);
}
for (int i = 0; i < num_consumers; i++) {
consumers[i] = thread(consumer);
}
for (int i = 0; i < num_producers; i++) {
producers[i].join();
}
for (int i = 0; i < num_consumers; i++) {
consumers[i].join();
}
cout << "finished!" << endl;
}
You're holding the mutex the whole time--including yield()-ing while holding the mutex.
Scope the unique_lock like you do in your producer's code, popping from the queue and incrementing the counter atomically.
I see that you have a max queue size. You need a 2nd condition for the producer to wait on if the queue is full, and the consumer will signal this condition as it consumes items.
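A rough sketch of that second condition, using an extra condition variable (cv_not_full is a name invented here; the consumer notifies it right after popping):

condition_variable cv_not_full;   // signalled whenever a slot frees up

// producer side: replaces the empty busy-wait loop
{
    unique_lock<mutex> lk(queue_mutex);
    cv_not_full.wait(lk, [] { return (int)q.size() < max_queue_size; });
    q.push_back(equation);
    operations_created++;
}
cv.notify_all();

// consumer side: after q.pop_front(), once queue_mutex has been released
cv_not_full.notify_one();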
Any idea why other threads wouldn't have a chance to grab work?
This poll is troubling:
while ((operations_created - operations_consumed) >= max_queue_size)
{
// don't do anything until it has space available
}
You might try a minimal delay in the loop ... this is a 'bad neighbor', and can 'consume' a core.
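For instance (the 10 ms back-off is arbitrary):

while ((operations_created - operations_consumed) >= max_queue_size) {
    // give the consumers a chance instead of burning a core
    this_thread::sleep_for(chrono::milliseconds(10));
}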
There are a few issues with your code:
Using Normal Variables for Inter-Thread Communication
Here is an example:
int operations_created = 0;
int operations_consumed = 0;
void producer(int n) {
[...]
while ((operations_created - operations_consumed) >= max_queue_size) { }
and later
void consumer() {
[...]
operations_consumed++;
This will only appear to work on x86 architectures and only without optimizations, i.e. -O0; it is a data race either way. Once we enable optimizations, the compiler will optimize the while loop to:
void producer(int n) {
[...]
if ((operations_created - operations_consumed) >= max_queue_size) {
while (true) { }
}
So your program simply hangs here. You can check this on Compiler Explorer.
mov eax, DWORD PTR operations_created[rip]
sub eax, DWORD PTR operations_consumed[rip]
cmp eax, DWORD PTR max_queue_size[rip]
jl .L19 // here is the if before the loop
.L20:
jmp .L20 // here is the empty loop
.L19:
Why is this happening? From a single-threaded point of view, while (condition) { body } is exactly equivalent to if (condition) while (true) { body } as long as the body does not change the condition, so the compiler is allowed to hoist the check out of the loop.
To fix the issue, we should use std::atomic<int> instead of plain int. Atomics are designed for inter-thread communication, so the compiler will not apply such optimizations and will generate correct assembly.
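A minimal sketch of that change; only the declarations move to std::atomic, and the existing arithmetic keeps compiling because std::atomic<int> converts to int on every read:

#include <atomic>

atomic<int> operations_created{0};
atomic<int> operations_consumed{0};
// the busy-wait now re-reads both counters on every iteration,
// because atomic loads cannot be folded into a single check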
Consumer Holds the Mutex During yield()
Have a look at this snippet:
void consumer() {
while (true) {
unique_lock<mutex> lk(queue_mutex);
[...]
this_thread::yield();
[...]
}
Basically this means that the consumer does the yield() while still holding the lock. Since only one consumer can hold the lock at a time (mutex stands for mutual exclusion), that explains why the other consumers cannot get any work.
To fix this issue, we should unlock the queue_mutex before the yield(), i.e.:
void consumer() {
while (true) {
{
unique_lock<mutex> lk(queue_mutex);
[...]
}
this_thread::yield();
[...]
}
This still does not guarantee that a single thread won't do most of the tasks. When we call notify_all() in the producer, all threads are woken up, but only one will lock the mutex. Since the scheduled work is tiny, by the time the producer calls notify_all() again our thread has already finished its work, done the yield() and is ready for the next item.
So why does this thread lock the mutex and not another one? I guess that is happening due to CPU caches and busy waiting. The thread that just finished the work is "hot": it is in the CPU cache and ready to lock the mutex. Before going to sleep it might also busy-wait on the mutex for a few cycles, which increases its chances of winning even more.
To fix this, we can either remove the sleep in the producer (so it wakes the other threads more often and they stay "hot" as well), or do a sleep() in the consumer instead of the yield() (so this thread becomes "cold" during the sleep).
In any case there is no opportunity to do the work in parallel because of the mutex, so the fact that the same thread does most of the work is completely natural, in my opinion.
I have created producer / consumer code as follows:
#include <iostream>
#include <queue>
#include <vector>
#include <thread>
#include <mutex>
#include <condition_variable>
using namespace std;
class CTest{
public:
void producer( int i ){
unique_lock<mutex> l(m);
q.push(i);
if( q.size() )
cnd.notify_all();
}
void consumer(int i ){
unique_lock<mutex> l(m);
while( q.empty() ){
cnd.wait(l );
}
if( q.empty())
return;
cout << "IM AWAKE :" << i << endl;
int tmp = q.front();
q.pop();
l.unlock();
cout << "Producer got " << tmp << endl;
}
void ConsumerInit( int threads ){
for( int i = 0; i < threads; i++ ){
thrs.push_back(thread(&CTest::consumer, this ,i));
}
}
void waitForTHreads(){
for( auto &a : thrs )
a.join();
}
void printQueue(){
while( ! q.empty()){
int tmp = q.front();
q.pop();
cout << "Queue got " << tmp << endl;
}
}
private:
queue<int> q;
vector<thread> thrs;
mutex m;
condition_variable cnd;
};
and main
int main(){
int x;
CTest t;
int counter = 0;
while( cin >> x ){
if( x == 0 ){
cout << "yay" << endl;;
break;
}
if( x == 1)
t.producer(counter++);
if( x == 2 )
t.ConsumerInit(5);
}
t.waitForTHreads();
t.printQueue();
return 0;
}
What this code does: when the user inputs "1" it adds a number to the queue; when the user inputs "2", 5 threads are spawned to retrieve data from the queue and print it. However, my problem is the following: when I input
6 numbers, only 5 of them are printed, due to the fact that only 5 threads are spawned. What I want is for a thread to retrieve a value from the queue, print it, and then wait again until it can print another value. This way all N > 5 numbers would be printed with just 5 threads.
My question is: what is the standard way to achieve this? I read a few documents but couldn't find, or think of, a good solution. How are problems like this usually solved?
When I try to create a simple thread pool:
void consumer(int i ){
while(true){
{
unique_lock<mutex> l(m);
while( q.empty() ){
cnd.wait(l );
}
if( q.empty())
return;
cout << "IM AWAKE :" << i << endl;
int tmp = q.front();
q.pop();
cout << "Producer " << i << " got " << tmp << endl;
} //consumer(i);
}
}
and input N numbers, all numbers are processed by one thread.
Thanks for help!
The current version of consumer can only read one value before exiting. In order to read more, it must loop, and this leads to your second version of consumer which has two problems:
Consumption here is so quick that the first thread into the queue can consume the whole queue within its timeslice (or however CPU is being allocated). Insert a yield or a sleep to force the OS to switch tasks.
The mutex is not unlocked so no other threads are able to get in.
Fortunately you aren't creating the threads until you need them, and they terminate once the queue is empty, so the whole deal with the condition_variable can go out the window.
void consumer(int i)
{
unique_lock<mutex> l(m);
while (!q.empty())
{
int tmp = q.front();
q.pop();
cout << i << " got " << tmp << endl;
// note: In the real world, locking around a cout is gross. cout is slow,
// so you want the unlock up one line. But...! This allows multiple threads
// to write to the consle at the same time and that makes your output
// look like it was tossed into a blender, so we'll take the performance hit
l.unlock(); // let other threads have a turn
this_thread::yield();
l.lock(); // lock again so the queue can be safely inspected
}
}
If you need to go with the threadpool approach, things get a little messier and the condition variable makes a return.
void consumer(int i)
{
while (true)
{
unique_lock<mutex> l(m);
if (q.empty())
{
cnd.wait(l);
}
if (!q.empty()) // OK. We got out of the conditional wait, but have
// other threads sucked the queue dry? Better check.
{
int tmp = q.front();
q.pop();
cout << i << " got " << tmp << endl;
}
l.unlock();
this_thread::yield();
}
}
An atomic<bool> terminated may be helpful to allow the orderly shutdown that a bare while (true) does not provide.
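A rough sketch of that shutdown flag (terminated is a name invented here; whichever code decides that no more work is coming would set it under the mutex and notify):

atomic<bool> terminated{false};

void consumer(int i)
{
    while (!terminated)
    {
        unique_lock<mutex> l(m);
        if (q.empty() && !terminated)
        {
            cnd.wait(l);           // woken either for new work or for shutdown
        }
        if (!q.empty())
        {
            int tmp = q.front();
            q.pop();
            cout << i << " got " << tmp << endl;
        }
    }
}

// whoever decides the work is done:
// { lock_guard<mutex> l(m); terminated = true; }  // set under the same mutex to avoid a lost wakeup
// cnd.notify_all();                               // wake every consumer so it can see the flag and exit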
In general, without going into code details, a threadpool is created and the threads are put into a wait state (waiting on one or more events / signals, or in your case condition_variable cnd;). I'm used to working with events, so I'll use that term in the following text, but a condition_variable works in a similar way.
When a task is added to the queue, a task event is set/fired and one or more threads wake up (depending on the event (single / multi)).
When a thread wakes up, it checks (with a lock) whether there is a task available; if so, it executes the task, and when finished it checks again (!) whether there are more tasks waiting (because when you add 8 tasks in one go and 5 threads become active, they need to check for more tasks after finishing their first one).
If there are no jobs left, the thread goes back into the wait state (waiting for the next job, or a quit event).
When quitting the application, another event, say a quit event, is set for all threads (you can't just wait for the threads to finish, because the threads themselves are waiting on an event to do some work). Alternatively, you could fire the same event but first set a shared flag (preferably std::atomic<bool>), which the threads then check on every wakeup to see whether they need to quit or do another job. Then you can wait for the threads to 'come home'.
A lock should be held for as short a time as possible.
As for your code:
void producer( int i ){
unique_lock<mutex> l(m);
q.push(i);
if( q.size() )
cnd.notify_all();
}
Here the lock is held longer than needed (and perhaps too long). You also just pushed a value, so q will not be empty (no need to check). Since you only add one item (task), only one thread should be woken up (so notify_one() should be fine here).
So you should: lock, push, unlock, notify. Instead of an explicit unlock, you can place the lock and the push inside a nested scope (braces), which triggers the unlock in the lock guard's destructor.
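A sketch of the producer in that order:

void producer(int i) {
    {
        lock_guard<mutex> l(m);   // hold the lock only around the push
        q.push(i);
    }                             // unlocked here
    cnd.notify_one();             // one item pushed, so wake one consumer
}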
void consumer(int i ){
unique_lock<mutex> l(m);
while( q.empty() ){
cnd.wait(l );
}
if( q.empty())
return;
cout << "IM AWAKE :" << i << endl;
int tmp = q.front();
q.pop();
l.unlock();
cout << "Producer got " << tmp << endl;
}
Here you should: lock, check the queue, pop if there is a task, unlock; if there was no task, put the thread back into the wait state, otherwise do the work with the popped value (after unlocking), and then check again whether there is more work to do. Normally it is not a good idea to call cout while the data is locked, but for a small test you can get away with it, especially because cout needs to be synchronized too (it would be cleaner to synchronize cout on its own, separate from your data lock).
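A sketch of the consumer following that pattern, written as the CTest member function (printing after the lock is released):

void consumer(int i) {
    while (true) {                 // a real pool would also check a quit flag here, as discussed above
        int tmp;
        {
            unique_lock<mutex> l(m);
            cnd.wait(l, [this] { return !q.empty(); });  // also handles spurious wakeups
            tmp = q.front();
            q.pop();
        }                          // lock released before the (slow) printing
        cout << "Consumer " << i << " got " << tmp << endl;
    }
}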
void printQueue(){
while( ! q.empty()){
int tmp = q.front();
q.pop();
cout << "Queue got " << tmp << endl;
}
}
Make sure your data is locked here too! (although it's only called from main after the threads have finished, the function is in your class, and the data should be locked).
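For example:

void printQueue() {
    lock_guard<mutex> guard(m);   // lock the data even in "single-threaded" cleanup code
    while (!q.empty()) {
        cout << "Queue got " << q.front() << endl;
        q.pop();
    }
}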
I am trying an example that causes a race condition, in order to then fix it by applying a mutex. However, even with the mutex, the race still seems to happen. What's wrong? Here is my code:
#include <iostream>
#include <boost/thread.hpp>
#include <vector>
using namespace std;
class Soldier
{
private:
boost::thread m_Thread;
public:
static int count , moneySpent;
static boost::mutex soldierMutex;
Soldier(){}
void start(int cost)
{
m_Thread = boost::thread(&Soldier::process, this,cost);
}
void process(int cost)
{
{
boost::mutex::scoped_lock lock(soldierMutex);
//soldierMutex.lock();
int tmp = count;
++tmp;
count = tmp;
tmp = moneySpent;
tmp += cost;
moneySpent = tmp;
// soldierMutex.unlock();
}
}
void join()
{
m_Thread.join();
}
};
int Soldier::count, Soldier::moneySpent;
boost::mutex Soldier::soldierMutex;
int main()
{
Soldier s1,s2,s3;
s1.start(20);
s2.start(30);
s3.start(40);
s1.join();
s2.join();
s3.join();
for (int i = 0; i < 100; ++i)
{
Soldier s;
s.start(30);
}
cout << "Total soldier: " << Soldier::count << '\n';
cout << "Money spent: " << Soldier::moneySpent << '\n';
}
It looks like you're not waiting for the threads started in the loop to finish. Change the loop to:
for (int i = 0; i < 100; ++i)
{
Soldier s;
s.start(30);
s.join();
}
Edit to explain further:
The problem you saw was that the values printed out were wrong, so you assumed there was a race condition in the threads. The race was in fact in when you printed the values: they were printed while not all of the threads had had a chance to execute.
Based on this and your previous post (where it does not seem you have read all the answers yet), what you are looking for is some form of synchronization point to prevent the main() thread from exiting the application (because when the main thread exits, the application and all its child threads die).
This is why you call join() all the time: to prevent the main() thread from exiting until the thread has exited. As a result of this usage, though, your loop of threads is not parallel and each thread runs to completion in sequence (so there is no real point in using threads).
Note: join() like in Java waits for the thread to complete. It does not start the thread.
A quick look at the boost documentation suggests what you are looking for is a thread group which will allow you to wait for all threads in the group to complete before exiting.
//No compiler so this is untested.
// But it should look something like this.
// Note 2: I have not used boost::threads much.
int main()
{
    boost::thread_group group;
    for(int loop = 0; loop < 100; ++loop)
    {
        // create_thread() constructs the thread from the callable
        // and adds it to the group, which owns it from then on.
        group.create_thread(<Function To Call>);
    }
    // Make sure main does not exit before all the threads have completed.
    group.join_all();
}
If we go back to your example and retrofit your Soldier class:
int main()
{
boost::thread_group batallion;
// Make all the soldiers part of a group.
// When you start the thread make the thread join the group.
Soldier s1(batallion);
Soldier s2(batallion);
Soldier s3(batallion);
s1.start(20);
s2.start(30);
s3.start(40);
// Create storage for 100 soldiers outside the loop
std::vector<Soldier> lotsOfSoldiers;
lotsOfSoldiers.reserve(100); // to prevent reallocation in the loop.
// Because you are using objects we need to
// prevent copying of them after the thread starts.
for (int i = 0; i < 100; ++i)
{
lotsOfSoldiers.push_back(Soldier(batallion));
lotsOfSoldiers.back().start(30);
}
// Print out values while threads are still running
// Note you may get here before any thread.
cout << "Total soldier: " << Soldier::count << '\n';
cout << "Money spent: " << Soldier::moneySpent << '\n';
batallion.join_all();
// Print out values when all threads are finished.
cout << "Total soldier: " << Soldier::count << '\n';
cout << "Money spent: " << Soldier::moneySpent << '\n';
}