I would like to launch a member function in a separate thread calling it from another member.
Maybe the code below is clearer.
There is a button which launches the counter in a thread and it works:
void MainWindow::on_pushButton_CountNoArgs_clicked()
{
    myCounter *counter = new myCounter;
    QFuture<void> future = QtConcurrent::run(counter, &myCounter::countUpToThousand);
}
myCounter class member functions:
void myCounter::countUpToHundred()
{
    for(int i = 0; i <= 100; i++)
    {
        qDebug() << "up to 100: " << i;
    }
}

void myCounter::countUpToThousand()
{
    for(int i = 0; i <= 1000; i++)
    {
        qDebug() << "up to 1000: " << i;
        if (i == 500)
        {
            //here I want to launch myCounter::countUpToHundred() in another thread
        }
    }
}
Thanks in advance.
Assuming you want to run the two counters in parallel, you have three threads:
Thread 1: UI-Thread (or main thread)
This is where on_pushButton_CountNoArgs_clicked() runs. You should not do heavy work in this function: if you want to achieve 60 frames per second, you only have about 16 ms for all of it. Starting a new thread to run countUpToThousand() is therefore a good idea.
Thread 2: background thread (started by QtConcurrent, running countUpToThousand)
This runs in parallel to Thread 1, and you are working with the same instance of myCounter (i.e. the same place in memory) so be careful which member variables you read and write.
Thread 3: background thread (started by QtConcurrent, running countUpToHundred)
Start it using (as hank pointed out):
void myCounter::countUpToThousand()
{
    for(int i = 0; i <= 1000; i++)
    {
        qDebug() << "up to 1000: " << i;
        if (i == 500)
        {
            QtConcurrent::run(this, &myCounter::countUpToHundred);
        }
    }
}
This will run in parallel to Thread 1 and Thread 2.
Now you might get crazy output results like 988\n99\n when one counter is at 999 and the other is at 88 because Thread 2 and Thread 3 will be printing to console at the same time and don't care about what the other thread is doing.
Also note that you must not delete counter before Thread 2 and Thread 3 are done, because if you do, they'll still try to access that memory and your application will probably crash.
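For example, a minimal sketch of one way to handle that (assuming the Qt 5 style QtConcurrent::run(object, member) overload used in the question; the m_outerFuture and m_counter members are hypothetical, not part of the original code): let countUpToThousand() keep the inner future and wait on it before returning, and keep the outer future somewhere you can wait on before deleting the counter.

void myCounter::countUpToThousand()
{
    QFuture<void> inner;                     // future for Thread 3
    for(int i = 0; i <= 1000; i++)
    {
        qDebug() << "up to 1000: " << i;
        if (i == 500)
            inner = QtConcurrent::run(this, &myCounter::countUpToHundred);
    }
    inner.waitForFinished();                 // Thread 2 now outlives Thread 3
}

// Later, e.g. in MainWindow's destructor (hypothetical members):
// m_outerFuture.waitForFinished();          // Thread 2 (and therefore Thread 3) is done
// delete m_counter;                         // only now is it safe to free the counter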
Related
Suppose I have a client thread and a server thread. The client thread must perform an expensive for loop operation which is prone to hanging. Thus, the server has to independently determine whether each tick of the for loop has exceeded the maximum time. The context behind this is that the server will time out the client if it takes too long to complete a tick.
My initial idea below is to have two for loops in the client and server thread. The server thread will have a condition variable that waits for 1 second. If the client does not notify the condition variable in 1 second every tick, the server will time it out:
Server
bool success;
for (int i = 0; i < 10; i++) {
    std::unique_lock<std::mutex> lock(CLIENT_MUTEX);
    // wait_for without a predicate returns std::cv_status, not bool
    success = (CLIENT_CV.wait_for(lock, std::chrono::seconds(1)) == std::cv_status::no_timeout);
    if (!success) {
        std::cout << "timed out during tick " << i << std::endl;
        break;
    }
}
Client
for (int i = 0; i < 10; i++) {
    std::unique_lock<std::mutex> lock(CLIENT_MUTEX);
    //do work
    CLIENT_CV.notify_one();
}
However my implementation attempt is unreliable and times out at random times given the same work for the client. How can I improve the design to make it more reliable?
Side Note:
A simple solution to this would be for the server to time the entire for loop instead of each tick. However, if the for loop fails on tick 1 out of 10 while the timer is waiting for 10 seconds, the client will only be informed after the full 10 seconds. If the server instead imposes a 1-second timeout per tick (10 x 1 sec = 10 secs), the client is informed of the timeout without having to wait the full 10 seconds.
Edit.
This whole client/server/timeout analogy is simply to put the question into context. I'm purely interested in the best way to time the for loop from a different thread.
One way of doing this might be:
Shared vars:
std::vector<std::chrono::time_point<std::chrono::high_resolution_clock>> ledger;
std::mutex ledger_mtx;
Client:
for (int i = 0; i < 10; i++) {
    {
        std::scoped_lock lock(ledger_mtx);
        ledger.push_back(std::chrono::high_resolution_clock::now());
    }
    // Do work
}
{
    std::scoped_lock lock(ledger_mtx);
    ledger.push_back(std::chrono::high_resolution_clock::now());
}
Server:
using namespace std::chrono_literals; // for the 1s literals below

size_t id = 0;
std::chrono::time_point<std::chrono::high_resolution_clock> last_tick;
std::this_thread::sleep_for(1s); // Some delay so that the initial write to the ledger is made
while(true) {
    {
        std::scoped_lock lock(ledger_mtx);
        if(ledger.size() == id) { /* Do something if the thread hangs */ }
        id = ledger.size();
        last_tick = ledger.back();
    }
    if(id == 11) break;
    std::this_thread::sleep_for(1s - (std::chrono::high_resolution_clock::now() - last_tick));
}
This way you can time the thread while monitoring it from the outside. Is it the best way? Probably not, but it does give you the times you need.
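Going back to the condition-variable attempt from the question, its unreliability most likely comes from the client doing its work while holding CLIENT_MUTEX and from notifications being lost whenever the server is not currently inside wait_for. A sketch of that variant with a shared tick counter and a predicate (CLIENT_TICK is an illustrative addition, not from the question):

#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>

std::mutex CLIENT_MUTEX;
std::condition_variable CLIENT_CV;
int CLIENT_TICK = 0;                      // how many ticks the client has completed

// Server: wait until the tick counter advances, at most 1 second per tick.
void server() {
    for (int i = 0; i < 10; i++) {
        std::unique_lock<std::mutex> lock(CLIENT_MUTEX);
        bool advanced = CLIENT_CV.wait_for(lock, std::chrono::seconds(1),
                                           [&] { return CLIENT_TICK > i; });
        if (!advanced) {
            std::cout << "timed out during tick " << i << std::endl;
            break;
        }
    }
}

// Client: do the work outside the lock, then publish the finished tick.
void client() {
    for (int i = 0; i < 10; i++) {
        // do work (unlocked, so the server can run and check its deadline)
        {
            std::lock_guard<std::mutex> lock(CLIENT_MUTEX);
            ++CLIENT_TICK;
        }
        CLIENT_CV.notify_one();
    }
}

Because the server waits on a predicate over shared state, a tick that completes while the server is busy elsewhere is not lost, and spurious wakeups are handled automatically.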
I want to create over 500 threads in C++ on a BeagleBone Black,
but the program has errors.
Could you explain why the errors occur and how I can fix them?
In the thread function: call_from_thread(int tid)
void call_from_thread(int tid)
{
    cout << "thread running : " << tid << std::endl;
}
In the main function:
int main() {
    thread t[500];
    for(int i=0; i<500; i++) {
        t[i] = thread(call_from_thread, i);
        usleep(100000);
    }
    std::cout << "main fun start" << endl;
    return 0;
}
I expect:
...
...
thread running : 495
thread running : 496
thread running : 497
thread running : 498
thread running : 499
main fun start
but I get:
...
...
thread running : 374
thread running : 375
thread running : 376
thread running : 377
thread running : 378
terminate called after throwing an instance of 'std::system_error'
what(): Resource temporarily unavailable
Aborted
could you help me?
The BeagleBone Black appears to have a maximum of 512 MB of DRAM.
The default stack size of a thread created with pthread_create() is about 2 MB,
i.e. 2^29 / 2^21 = 2^8 = 256 threads' worth of stacks. So what you're probably seeing around thread 374 is the system failing to find memory for yet another thread stack: pthread_create() fails with EAGAIN ("Resource temporarily unavailable"), and std::thread reports that
by throwing std::system_error.
If you really want to see this explode, try moving that sleep call inside your thread function. :)
You could try preallocating the stack to 1 MB or less (pthreads), but that has its
own set of problems.
The questions to really ask yourself are:
Is my application I/O-bound or compute-bound?
What's my memory budget to run this application? If you spend your entire physical memory
on thread stacks, you'll have nothing left for the shared program heap.
Do I really need this much parallelism to do the job? The A8 is a single core machine BTW.
Could I solve the problem using a thread pool? Or not use threads at all?
Finally, you can't set the stack size through the std::thread API, but you can with
boost::thread.
Or just write a thin wrapper around pthreads (assuming Linux).
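For illustration, a minimal sketch of such a wrapper on Linux, using pthread_attr_setstacksize; the 256 KiB figure and the function names are just examples, and the value must stay above PTHREAD_STACK_MIN and be large enough for your deepest call chain:

#include <pthread.h>
#include <cstdio>

// pthreads wants a void* (*)(void*) entry point, so smuggle the id through the void*.
static void* trampoline(void* arg)
{
    long tid = reinterpret_cast<long>(arg);
    std::printf("thread running : %ld\n", tid);
    return nullptr;
}

bool start_small_stack_thread(pthread_t* out, long tid)
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 256 * 1024);   // example: 256 KiB per thread
    int rc = pthread_create(out, &attr, trampoline, reinterpret_cast<void*>(tid));
    pthread_attr_destroy(&attr);
    return rc == 0;   // rc == EAGAIN is the same "Resource temporarily unavailable"
}

Each thread then costs a few hundred KiB of address space instead of megabytes, at the risk of overflowing the smaller stack.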
Whenever you use threads, there are three parts.
Start the threads
Do the work
Release the thread
You're starting the threads and doing the work, but you're not releasing them.
Releasing threads. There are two options for releasing a thread.
You can join the thread (which basically waits for it to finish)
You can detach the thread, and let it execute independently.
In this particular case, you don't want the program to finish until all threads are done executing, so you should join them.
#include <iostream>
#include <thread>
#include <vector>
#include <string>

auto call_from_thread = [](int i) {
    // I create the entire message before printing it, so that there's no interleaving of messages between threads
    std::string message = "Calling from thread " + std::to_string(i) + '\n';
    // Because I only call print once, everything gets printed together
    std::cout << message;
};

using std::thread;

int main() {
    thread t[500];
    for(int i=0; i<500; i++) {
        // Here, I don't have to start the thread with any delay
        t[i] = thread(call_from_thread, i);
    }
    std::cout << "main fun start\n";
    // I join each thread (which waits for them to finish before closing the program)
    for(auto& item : t) {
        item.join();
    }
    return 0;
}
I want to make a program that gets the IDs from a database and creates a thread running the same function for each ID. It works, but when I add a while loop to the function it just hangs there and doesn't get the next IDs.
My code is:
void foo(char* i) {
    while(1){
        std::cout << i;
    }
}

void makeThreads()
{
    int i;
    MYSQL *sqlhnd = mysql_init(NULL);
    mysql_real_connect(sqlhnd, "127.0.0.1", "root", "h0flcepqE", "Blazor", 3306, NULL, 0);
    mysql_query(sqlhnd, "SELECT id FROM `notifications`");
    MYSQL_RES *confres = mysql_store_result(sqlhnd);
    int totalrows = mysql_num_rows(confres);
    int numfields = mysql_num_fields(confres);
    MYSQL_FIELD *mfield;
    MYSQL_ROW row;

    while((row = mysql_fetch_row(confres)))
    {
        for(i = 0; i < numfields; i++)
        {
            printf("%s", row[i]);
            std::thread t(foo, row[i]);
            t.join();
        }
    }
}

int main()
{
    makeThreads();
    return 0;
}
Output is:
1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111
Thanks
The for loop in question currently creates one thread object and one thread. Period. Joining hides this problem in a way by forcing the main thread to wait for that thread to run to completion. That the thread can't ever complete (it loops forever) is another issue.
Creating a thread and immediately joining forces your program to run sequentially and defeats the point of using threads. Not joining the thread will result in Bad Things because the thread object will be destroyed at the end of each loop iteration and the thread has not been detached. Destroying a joinable thread calls std::terminate, which does pretty much what it sounds like it does: It hunts down and kills Sarah Connor. Just kidding. It ends your program with all the subtlety of a headsman's axe.
You could detach the threads manually by calling detach, but that's a really, really Bad Idea because you lose control of the thread and your program will exit while the threads are still running.
You need to store these threads and join them later, after the loop that spawns them.
Here's a simple approach to do that:
std::vector<std::thread> threads;

for(i = 0; i < numfields; i++)
{
    std::cout << row[i];
    threads.push_back(std::thread(foo, row[i]));
}

for (std::thread & t: threads)
{
    t.join();
}
Now you will have numfields threads running forever, and I'm sure you can take care of that problem on your own.
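For the "running forever" part, one common pattern (a sketch, not taken from the question's code) is a shared stop flag that foo checks on every iteration and that you set before joining:

#include <atomic>
#include <iostream>

std::atomic<bool> stop_requested{false};

void foo(char* i) {
    while (!stop_requested.load()) {   // leave the loop once asked to stop
        std::cout << i;
    }
}

// ... spawn the threads as above, then later:
// stop_requested = true;              // ask every foo() to finish its current pass
// for (std::thread& t : threads) t.join();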
t.join();
This means the program waits here for the thread t to finish.
Since:
t executes foo
foo never ends, due to while true
Then: you never execute the instructions after the join
So you have the uninterrupted 111111
I am trying to use the Threading Building Blocks task_arena. There is a simple array filled with '0'. The arena's threads put '1' in the odd places of the array. The main thread puts '2' in the even places.
/* Odd-even arenas tbb test */
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <tbb/task_arena.h>
#include <tbb/task_group.h>
#include <iostream>

using namespace std;

const int SIZE = 100;

int main()
{
    tbb::task_arena limited(1); // no more than 1 thread in this arena
    tbb::task_group tg;
    int myArray[SIZE] = {0};

    //! Main thread create another thread, then immediately returns
    limited.enqueue([&]{
        //! Created thread continues here
        tg.run([&]{
            tbb::parallel_for(tbb::blocked_range<int>(0, SIZE),
                [&](const tbb::blocked_range<int> &r)
                {
                    for(int i = 0; i != SIZE; i++)
                        if(i % 2 == 0)
                            myArray[i] = 1;
                }
            );
        });
    });

    //! Main thread do this work
    tbb::parallel_for(tbb::blocked_range<int>(0, SIZE),
        [&](const tbb::blocked_range<int> &r)
        {
            for(int i = 0; i != SIZE; i++)
                if(i % 2 != 0)
                    myArray[i] = 2;
        }
    );

    //! Main thread waiting for 'tg' group
    //** it does not create any threads here (doesn't it?) */
    limited.execute([&]{
        tg.wait();
    });

    for(int i = 0; i < SIZE; i++) {
        cout << myArray[i] << " ";
    }
    cout << endl;

    return 0;
}
The output is:
0 2 0 2 ... 0 2
So the limited.enqueue{tg.run{...}} block doesn't work.
What's the problem? Any ideas? Thank you.
You have created the limited arena for one thread only, and by default this slot is reserved for the master thread. Though enqueuing into such a serializing arena temporarily boosts its concurrency level to 2 (in order to satisfy the 'fire-and-forget' promise of enqueue), enqueue() does not guarantee synchronous execution of the submitted task. So tg.wait() can start before tg.run() executes, and thus the program does not wait for the worker thread to be created, join the limited arena, and fill the array with '1' (BTW, each invocation of the parallel_for body touches the whole array, because the loops iterate over SIZE instead of the range r).
So, in order to wait for tg.run() to complete, use limited.execute instead. But that prevents the automatic boosting of the limited arena's concurrency level, and the task will be deferred until tg.wait() is executed by the master thread.
If you want to see asynchronous execution, set the arena's concurrency to 2 manually: tbb::task_arena limited(2);
or disable slot reservation for the master thread: tbb::task_arena limited(1, 0) (but note that it implies additional overhead for dynamically balancing the number of threads in the arena).
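For instance, the two constructor variants just mentioned look like this; only the declaration of limited changes, the rest of the snippet stays as it is:

// Variant 1: give the arena two slots, so a worker thread can join it
// alongside the slot reserved for the master thread.
tbb::task_arena limited(2);

// Variant 2: keep a single slot but reserve none for the master thread, so a
// worker can occupy it (at the cost of extra dynamic-balancing overhead).
tbb::task_arena limited(1, 0);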
P.S. TBB has no points where worker threads are guaranteed to arrive (unlike OpenMP). Only the enqueue methods guarantee the creation of at least one worker thread, but they say nothing about when it will arrive. See the local observer feature to get a notification when threads actually join an arena.
For example, I want each thread to not start running until the previous one has completed. Is there a flag for this, something like thread.isRunning()?
#include <iostream>
#include <vector>
#include <thread>

using namespace std;

void hello() {
    cout << "thread id: " << this_thread::get_id() << endl;
}

int main() {
    vector<thread> threads;

    for (int i = 0; i < 5; ++i)
        threads.push_back(thread(hello));

    for (thread& thr : threads)
        thr.join();

    cin.get();
    return 0;
}
I know that the threads are meant to run concurrently, but what if I want to control the order?
There is no thread.isRunning(). You need some synchronization primitive to do it.
Consider std::condition_variable for example.
One approachable way is to use std::async. The current definition of std::async is that the associated state of an operation launched by std::async can cause the returned std::future's destructor to block until the operation is complete. This can limit composability and result in code that appears to run in parallel but in reality runs sequentially.
{
    std::async(std::launch::async, []{ hello(); });
    std::async(std::launch::async, []{ hello(); }); // does not run until hello() completes
}
If we need the second thread to start running only after the first one has completed, is a thread really needed?
As a solution, you could set a global flag, set its value in the first thread, and check the flag before starting the second thread; that should work.
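A literal sketch of that flag idea using std::atomic<bool> (the names are hypothetical; note the busy-wait, which is why simply joining the first thread, as other answers here suggest, is usually cleaner):

#include <atomic>
#include <thread>

std::atomic<bool> first_done{false};

void first()  { /* ... do the first thread's work ... */ first_done = true; }
void second() { /* ... only meant to run after first() has finished ... */ }

int main() {
    std::thread t1(first);
    while (!first_done) {          // poll the flag until the first thread sets it
        std::this_thread::yield();
    }
    std::thread t2(second);
    t1.join();
    t2.join();
}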
You can't simply control the order by saying "first thread 1, then thread 2, ..."; you will need to make use of synchronization (i.e. std::mutex and condition variables such as std::condition_variable_any).
You can create events so as to block one thread until a certain event happens.
See cppreference for an overview of the threading mechanisms in C++11.
You will need to use a semaphore or a lock.
If you initialize the semaphore to the value 0:
Call wait after thread.start() and call signal/release at the end of the thread's execution function (e.g. the run function in Java, an OnExit function, etc.).
That way the main thread will keep waiting until the thread in the loop has completed its execution.
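C++11 itself has no standard semaphore; std::counting_semaphore and std::binary_semaphore only arrived in C++20. Here is a sketch of that wait-after-start / release-at-end pattern assuming C++20 (with older compilers, the condition-variable approaches below achieve the same effect):

#include <iostream>
#include <semaphore>
#include <thread>

std::binary_semaphore done{0};   // starts at 0, so acquire() blocks until release()

void hello() {
    std::cout << "thread id: " << std::this_thread::get_id() << std::endl;
    done.release();              // "signal" at the end of the thread function
}

int main() {
    for (int i = 0; i < 5; ++i) {
        std::thread t(hello);
        done.acquire();          // "wait" right after starting the thread
        t.join();
    }
    return 0;
}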
Task-based parallelism can achieve this, but C++ does not currently offer a task model as part of its threading libraries. If you have TBB or PPL you can use their task-based facilities.
I think you can achieve this by using std::mutex and std::condition_variable from C++11. To be able to run the threads sequentially, an array of booleans is used; when a thread is done doing some work it writes true at its specific index in the array.
For example:
mutex mtx;
condition_variable cv;
bool ids[10] = { false };

void shared_method(int id) {
    unique_lock<mutex> lock(mtx);
    if (id != 0) {
        while (!ids[id - 1]) {
            cv.wait(lock);
        }
    }
    int delay = rand() % 4;
    cout << "Thread " << id << " will finish in " << delay << " seconds." << endl;
    this_thread::sleep_for(chrono::seconds(delay));
    ids[id] = true;
    cv.notify_all();
}

void test_condition_variable() {
    thread threads[10];
    for (int i = 0; i < 10; ++i) {
        threads[i] = thread(shared_method, i);
    }
    for (thread &t : threads) {
        t.join();
    }
}
Output:
Thread 0 will finish in 3 seconds.
Thread 1 will finish in 1 seconds.
Thread 2 will finish in 1 seconds.
Thread 3 will finish in 2 seconds.
Thread 4 will finish in 2 seconds.
Thread 5 will finish in 0 seconds.
Thread 6 will finish in 0 seconds.
Thread 7 will finish in 2 seconds.
Thread 8 will finish in 3 seconds.
Thread 9 will finish in 1 seconds.