I searched for a long time before asking this question, and I can't find how to solve my problem.
I have five threads (workers); these workers mine gold, transport it to the avant poste and unload it there.
My problem is that while a worker is mining gold, the user can input b to check whether there is enough gold and, if so, build a barrack.
While a worker is mining gold there is a 2-second sleep, which is why I use pthread_cond_timedwait().
I have global variables which store the number of barracks, the gold on the map and the gold in the avant poste.
Here is the pseudocode.
void makeBarrack(size_t data) {
    timespec waitTime = { 2, 0 };
    pthread_mutex_lock(&check_mutex);
    while (wantBarrack) {
        pthread_cond_timedwait(&condp, &gold_mutex, &waitTime);
    }
    std::cout << "Worker " << data << " is making barrack" << std::endl;
    wantBarrack = false;
    pthread_mutex_lock(&unload_mutex);
    avantPoste -= 100;
    pthread_mutex_unlock(&unload_mutex);
    barracks++;
    pthread_mutex_unlock(&check_mutex);
}
void *work(void *data, char input) {
    size_t thread_num = (size_t) data;
    pthread_mutex_lock(&gold_mutex);
    timespec waitTime = { 2, 0 };
    if ((input == 'B' || input == 'b') && avantPoste >= 100) {
        wantBarrack = true;
        input = 0;
    } else if ((input == 'B' || input == 'b') && avantPoste < 100) {
        std::cout << "There is " << avantPoste << " gold" << std::endl;
    }
    while (wantBarrack) {
        pthread_cond_timedwait(&condp, &gold_mutex, &waitTime);
    }
    makeBarrack(thread_num);
}
I am trying to make something like producer/consumer, but in my task I need to do something (mine gold) instead of waiting for other threads to mine.
Another question: do I need to use the same mutex in these two functions?
P.S.
I am a novice in multithreading, and it would be good if someone edited my question if something is wrong.
The problem was there: I've learnt that I can use a condition variable in a simple if. The main reason to use a condition variable is that we can block our thread without blocking other threads (it unlocks the mutex while waiting on the condition variable). We then just need to signal that the condition is met and we are ready to unblock (release) the thread and run the function we want. I am using pthread_cond_timedwait() because I can block my thread for as long as I want.
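Here is a minimal sketch of the waiting pattern I mean (the names done, data_mutex and data_cond are placeholders, not my real variables); note that pthread_cond_timedwait() expects an absolute deadline, so the timespec has to be built from the current time rather than being a plain 2-second interval:
#include <pthread.h>
#include <ctime>
#include <cerrno>

// Placeholder globals, analogous to wantBarrack / check_mutex above.
bool done = false;
pthread_mutex_t data_mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t data_cond = PTHREAD_COND_INITIALIZER;

// Block the caller for up to two seconds, waking earlier if done is signalled.
void waitForWork() {
    timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline); // timedwait wants an absolute time
    deadline.tv_sec += 2;

    pthread_mutex_lock(&data_mutex);
    while (!done) {                           // re-check the predicate after every wakeup
        int rc = pthread_cond_timedwait(&data_cond, &data_mutex, &deadline);
        if (rc == ETIMEDOUT)
            break;                            // two seconds passed, go back to mining
    }
    pthread_mutex_unlock(&data_mutex);
}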
I'm learning how to use RDMA over InfiniBand, and one problem I'm having is using a connection with more than one thread. I can't figure out how to create another completion queue, so the work completions get mixed up between the threads and it craps out. How do I create a queue for each thread using the connection?
Take this vomit for example:
void worker(struct ibv_cq* cq) {
    while (conn->peer_mr.empty()) Sleep(1);   // wait until the remote MRs have arrived

    struct ibv_wc wc{};
    struct ibv_send_wr wr{};
    memset(&wr, 0, sizeof wr);

    struct ibv_sge sge{};
    sge.addr = reinterpret_cast<unsigned long long>(conn->rdma_memory_region);
    sge.length = RDMA_BUFFER_SIZE;
    sge.lkey = conn->rdma_mr->lkey;

    wr.wr_id = reinterpret_cast<unsigned long long>(conn);
    wr.opcode = IBV_WR_RDMA_READ;
    wr.sg_list = &sge;
    wr.num_sge = 1;
    wr.send_flags = IBV_SEND_SIGNALED;

    struct ibv_send_wr* bad_wr = nullptr;
    while (true) {
        if (queue >= maxqueue) continue;
        for (auto i = 0ULL; i < conn->peer_mr.size(); ++i) {
            wr.wr.rdma.remote_addr = reinterpret_cast<unsigned long long>(conn->peer_mr[i]->mr.addr) + conn->peer_mr[i]->offset;
            wr.wr.rdma.rkey = conn->peer_mr[i]->mr.rkey;
            const auto err = ibv_post_send(conn->qp, &wr, &bad_wr);
            if (err) {
                std::cout << "ibv_post_send " << err << "\n" << "Errno: " << std::strerror(errno) << "\n";
                exit(err);
            }
            ++queue;
            conn->peer_mr[i]->offset += RDMA_BUFFER_SIZE;
            if (conn->peer_mr[i]->offset >= conn->peer_mr[i]->mr.length) conn->peer_mr[i]->offset = 0;
        }
        int ne;
        do {
            ne = ibv_poll_cq(cq, 1, &wc);   // completions from any thread's posts can show up here
        } while (!ne);
        --queue;
        ++number;
    }
}
If I had more than one of them, they would all be receiving each other's work completions; I want them to receive only their own and not those of other threads.
The completion queues are created somewhere outside of this code (you are passing in an ibv_cq *). If you'd like to figure out how to create multiple ones, that's the area to focus on.
However, the "crapping out" is not (just) happening because completions are mixed up between threads: the ibv_poll_cq and ibv_post_send functions are thread safe. Instead, the likely problem is that your code isn't thread-safe: there are shared data structures that are accessed without locks (conn->peer_mr). You would have the same issues even without RDMA.
The first step is to figure out how to split up the work into pieces. Think about the pieces that each thread will need to make it independent from the others. It'll likely be a single peer_mr, a separate ibv_cq *, and a specific chunk of your rdma_mr. Then code that :)
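For the separate ibv_cq * part, here is a rough sketch, with the caveat that it assumes things not shown in your code: kThreads and kCqDepth are arbitrary, ctx stands for the ibv_context your connection was opened on, and in practice each thread will also need its own QP created with its CQ as send_cq, because a QP only delivers completions to the CQs it was created with.
#include <infiniband/verbs.h>
#include <stdexcept>
#include <thread>
#include <vector>

void worker(struct ibv_cq* cq);     // the per-thread loop from the question

// Sketch: create one CQ per worker thread so each thread polls only its own
// completions. ctx is assumed to be the ibv_context of the connection.
void start_workers(ibv_context* ctx) {
    constexpr int kThreads = 4;
    constexpr int kCqDepth = 128;

    std::vector<ibv_cq*> cqs;
    for (int i = 0; i < kThreads; ++i) {
        ibv_cq* cq = ibv_create_cq(ctx, kCqDepth, nullptr, nullptr, 0);
        if (!cq) throw std::runtime_error("ibv_create_cq failed");
        cqs.push_back(cq);
    }

    std::vector<std::thread> threads;
    for (int i = 0; i < kThreads; ++i)
        threads.emplace_back(worker, cqs[i]);   // each thread gets its own CQ
    for (auto& t : threads) t.join();
}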
According to the documentation
the currently-running fiber retains control until it invokes some
operation that passes control to the manager
I can think of only one such operation - boost::this_fiber::yield - which may cause a control switch from one fiber to another. However, when I run something like
bf::fiber([](){std::cout << "Bang!" << std::endl;}).detach();
bf::fiber([](){std::cout << "Bung!" << std::endl;}).detach();
I get output like
Bang!Bung!
\n
\n
This means control was passed between << operators from one fiber to another. How could that happen? Why? What is the general definition of control passing from fiber to fiber in the context of the boost::fiber library?
EDIT001:
Can't get away without code:
#include <boost/fiber/fiber.hpp>
#include <boost/fiber/mutex.hpp>
#include <boost/fiber/barrier.hpp>
#include <boost/fiber/condition_variable.hpp>
#include <boost/fiber/algo/algorithm.hpp>
#include <boost/fiber/algo/work_stealing.hpp>
#include <cmath>
#include <iostream>
#include <thread>
#include <chrono>

namespace bf = boost::fibers;

class GreenExecutor
{
    std::thread worker;
    bf::condition_variable_any cv;
    bf::mutex mtx;
    bf::barrier barrier;

public:
    GreenExecutor() : barrier {2}
    {
        worker = std::thread([this] {
            bf::use_scheduling_algorithm<bf::algo::work_stealing>(2);
            // wait till all threads joining the work stealing have been registered
            barrier.wait();
            mtx.lock();
            // suspend main-fiber from the worker thread
            cv.wait(mtx);
            mtx.unlock();
        });
        bf::use_scheduling_algorithm<bf::algo::work_stealing>(2);
        // wait till all threads have been registered with the scheduling algorithm
        barrier.wait();
    }

    template<typename T>
    void PostWork(T&& functor)
    {
        bf::fiber {std::move(functor)}.detach();
    }

    ~GreenExecutor()
    {
        cv.notify_all();
        worker.join();
    }
};

int main()
{
    GreenExecutor executor;
    std::this_thread::sleep_for(std::chrono::seconds(1));
    int i = 0;
    for (auto j = 0ul; j < 10; ++j) {
        executor.PostWork([idx {++i}]() {
            auto res = pow(sqrt(sin(cos(tan(idx)))), M_1_PI);
            std::cout << idx << " - " << res << std::endl;
        });
    }
    while (true) {
        boost::this_fiber::yield();
    }
    return 0;
}
Output
2 - 1 - -nan
0.503334 3 - 4 - 0.861055
0.971884 5 - 6 - 0.968536
-nan 7 - 8 - 0.921959
0.9580699
- 10 - 0.948075
0.961811
OK, there were a couple of things I missed. First, my conclusion was based on a misunderstanding of how things work in boost::fiber.
The line in the constructor mentioned in the question
bf::use_scheduling_algorithm<bf::algo::work_stealing>(2);
was installing the scheduler in the thread where the GreenExecutor instance was created (the main thread). So, when launching the two worker fibers, I was actually setting up two threads that process the submitted fibers, which in turn run asynchronously, thus mixing the std::cout output. No magic, everything works as expected; boost::fiber::yield is still the only option to pass control from one fiber to another.
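For completeness, a minimal single-threaded sketch (separate from the executor above, using only the default round_robin scheduler) showing that the output of two fibers interleaves only at explicit yield points:
#include <boost/fiber/all.hpp>
#include <iostream>

int main() {
    boost::fibers::fiber a([] {
        std::cout << "A1" << std::endl;
        boost::this_fiber::yield();          // only here can another fiber take over
        std::cout << "A2" << std::endl;
    });
    boost::fibers::fiber b([] {
        std::cout << "B1" << std::endl;
        boost::this_fiber::yield();
        std::cout << "B2" << std::endl;
    });
    a.join();
    b.join();
    // Typical output on a single thread with round_robin: A1 B1 A2 B2
    return 0;
}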
The producer/consumer problem in concurrency: a producer produces things and appends them to a buffer. A consumer takes things from the buffer. The consumer doesn't want to take things from an empty buffer and the producer doesn't want to append things to a full buffer.
William Stallings' "Operating Systems" gives the following example of a monitor used to solve the producer/consumer problem:
// Monitor
append(char x) {
    if (count == N) cwait(notfull)
    buffer[nextin] = x
    nextin = (nextin + 1) % N
    count++
    csignal(notempty)
}
take(char x) {
    if (count == 0) cwait(notempty)
    x = buffer[nextout]
    nextout = (nextout + 1) % N
    count--
    csignal(notfull)
}

// Application using the monitor
producer() {
    while (true) {
        produce(x)
        append(x)
    }
}
consumer() {
    while (true) {
        take(x)
        consume(x)
    }
}
The book claims "only one process may be in the monitor at a time" [p.227]
How is this property enforced?
I can see how this would work with one consumer and one producer, but I fail to see how this prevents, for example, two producers from simultaneously writing to the buffer.
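For concreteness, here is a sketch (not from the book) of how I understand such a monitor could be implemented in C++: every monitor procedure acquires the same hidden lock on entry, and cwait releases it while the caller is blocked. C++ condition variables have Mesa semantics, so the book's if becomes a while here.
#include <mutex>
#include <condition_variable>

class BoundedBuffer {
    std::mutex monitor_lock;              // the implicit "one process at a time" lock
    std::condition_variable notfull, notempty;
    static const int N = 16;
    char buffer[N];
    int nextin = 0, nextout = 0, count = 0;
public:
    void append(char x) {
        std::unique_lock<std::mutex> lk(monitor_lock);   // enter the monitor
        while (count == N) notfull.wait(lk);             // cwait: releases the lock while blocked
        buffer[nextin] = x;
        nextin = (nextin + 1) % N;
        ++count;
        notempty.notify_one();                           // csignal(notempty)
    }                                                    // leaving the scope exits the monitor
    char take() {
        std::unique_lock<std::mutex> lk(monitor_lock);
        while (count == 0) notempty.wait(lk);
        char x = buffer[nextout];
        nextout = (nextout + 1) % N;
        --count;
        notfull.notify_one();                            // csignal(notfull)
        return x;
    }
};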
There are several operations being done on drive G. My program should read data from a file. When the disk usage is very high (>90%), the program should slow down the reading so it won't interfere with other processes that use the disk. Obviously, I guess, checking the Disk Time right after calling get_data_from_file() will cause the counter to return a very high percentage because the disk was just used. You can see that in the image.
Any suggestions on how I can correctly check the Disk Time?
PDH_HQUERY query;
PDH_HCOUNTER counter;

PdhOpenQuery(NULL, 0, &query);
PdhAddCounterA(query, "\\LogicalDisk(G:)\\% Disk Time", 0, &counter);
PdhCollectQueryData(query);

auto getDiskTime = [&]() -> double
{
    PDH_FMT_COUNTERVALUE fmtCounter;
    PdhCollectQueryData(query);
    PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE, 0, &fmtCounter);
    return fmtCounter.doubleValue;
};

for (...)
{
    get_data_from_file();
    print_done_percentage();

    double diskUsage = getDiskTime();
    if (diskUsage >= 90)
    {
        std::cout << "The disk usage is over " << diskUsage << "%. I'll wait..." << std::endl;
        while (diskUsage >= 90)
        {
            diskUsage = getDiskTime();
            Sleep(500);
        }
    }
}
A distinct monitoring thread could help you measure disk usage with more independence from the writing.
The function executed by the thread would look like this:
void diskmonitor(atomic<double>& du, const atomic<bool>& finished) {
    while (!finished) {                                     // stop looping as soon as the main thread has finished its job
        du = getDiskTime();                                 // measure disk
        this_thread::sleep_for(chrono::milliseconds(500));  // wait
    }
}
It communicates with the main thread through atomic variables (to avoid data races) passed by reference.
Your processing loop would look as follows:
atomic<bool> finished{false};   // tell diskmonitor that the processing is ongoing
atomic<double> diskusage{0.0};  // last disk usage read by diskmonitor

thread t(diskmonitor, ref(diskusage), ref(finished));  // launch monitor

for (int i = 0; i < 1000; i++)
{
    ...
    print_done_percentage();

    while (diskusage >= 90) {   // disk usage is filled in the background
        std::cout << "The disk usage is over " << diskusage << ". I'll wait...\n";
        this_thread::sleep_for(chrono::milliseconds(500));
    }
    ...
}

finished = true;  // tell diskmonitor that the processing is finished, so that it ends its loop
t.join();         // wait until diskmonitor has finished
This example is with standard C++ threads. Of course you could code something similar with OS specific threads.
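For reference, a skeleton putting the two fragments together; getDiskTime() and get_data_from_file() are assumed to be your existing routines, hoisted into free functions:
#include <atomic>
#include <thread>
#include <chrono>
#include <iostream>

double getDiskTime();        // assumed: wraps the PDH query from the question
void get_data_from_file();   // assumed: the existing read routine

void diskmonitor(std::atomic<double>& du, const std::atomic<bool>& finished) {
    while (!finished) {
        du = getDiskTime();
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
    }
}

int main() {
    std::atomic<bool> finished{false};
    std::atomic<double> diskusage{0.0};
    std::thread t(diskmonitor, std::ref(diskusage), std::ref(finished));

    for (int i = 0; i < 1000; ++i) {
        get_data_from_file();
        while (diskusage >= 90.0) {   // throttle while the disk is busy
            std::cout << "The disk usage is over " << diskusage << "%. I'll wait...\n";
            std::this_thread::sleep_for(std::chrono::milliseconds(500));
        }
    }

    finished = true;                  // let the monitor thread exit its loop
    t.join();
}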
I'm trying to set up some test software for code that is already written (that I cannot change). The issue I'm having is that it is getting hung up on certain calls, so I want to try to implement something that will kill the process if it does not complete in x seconds.
The two methods I've tried to solve this problem were to use fork or pthread, but neither has worked for me so far. I'm not sure why pthread didn't work; I'm assuming the static call I used to set up the thread had some issue with the memory needed to run the function I was calling (I continually got a segfault while the function I was testing was running). Fork worked initially, but the second time I forked a process, it wasn't able to check whether the child had finished or not.
In semi-pseudo code, this is what I've written:
test_runner()
{
    bool result;
    testClass* myTestClass = new testClass();
    pid_t pID = fork();
    if (pID == 0) // Child
    {
        myTestClass->test_function(); // function in question being tested
    }
    else if (pID > 0) // Parent
    {
        int status;
        sleep(5);
        if (waitpid(0, &status, WNOHANG) == 0)
        {
            kill(pID, SIGKILL); // If child hasn't finished, kill process and fail test
            result = false;
        }
        else
            result = true;
    }
}
This method worked for the initial test, but when I went to test a second function, the if (waitpid(0, &status, WNOHANG) == 0) check would indicate that the child had finished, even when it had not.
The pthread method looked something like this:
bool result;

test_runner()
{
    long thread = 1;
    pthread_t* thread_handle = (pthread_t*) malloc(sizeof(pthread_t));
    pthread_create(&thread_handle[thread], NULL, &funcTest, (void *)&thread); // Begin class that tests function in question
    sleep(10);
    if (pthread_cancel(thread_handle[thread]) == 0)
        // Child process got stuck, deal with accordingly
    else
        // Child process did not get stuck, deal with accordingly
}

static void* funcTest(void*)
{
    result = false;
    testClass* myTestClass = new testClass();
    result = myTestClass->test_function();
}
Obviously there is a little more going on than what I've shown; I just wanted to put the general idea down. I guess what I'm looking for is whether there is a better way to go about handling a problem like this, or whether someone sees any blatant issues with what I'm trying to do (I'm relatively new to C++). As I mentioned, I'm not allowed to touch the code that I'm setting up the test software for, which prevents me from putting signal handlers in the function I'm testing. I can only call the function and then deal with it from there.
If C++11 is an option, you could use std::future with wait_for for this purpose.
For example (live demo):
std::future<int> future = std::async(std::launch::async, [](){
    std::this_thread::sleep_for(std::chrono::seconds(3));
    return 8;
});

std::future_status status = future.wait_for(std::chrono::seconds(5));
if (status == std::future_status::timeout) {
    std::cout << "Timeout" << std::endl;
} else {
    std::cout << "Success" << std::endl;
} // will print Success

std::future<int> future2 = std::async(std::launch::async, [](){
    std::this_thread::sleep_for(std::chrono::seconds(3));
    return 8;
});

std::future_status status2 = future2.wait_for(std::chrono::seconds(1));
if (status2 == std::future_status::timeout) {
    std::cout << "Timeout" << std::endl;
} else {
    std::cout << "Success" << std::endl;
} // will print Timeout
Another thing:
As per the documentation, using waitpid with 0:
meaning wait for any child process whose process group ID is equal to
that of the calling process.
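A sketch of how you could apply that to your fork approach (the helper name and the 100 ms poll interval are just placeholders): wait on the specific pid returned by fork, and poll with WNOHANG instead of a single sleep.
#include <sys/wait.h>
#include <unistd.h>
#include <signal.h>

// Returns true if the child finished within timeout_sec, false if it had to be killed.
bool wait_or_kill(pid_t pid, int timeout_sec) {
    int status = 0;
    for (int polls = 0; polls < timeout_sec * 10; ++polls) {
        if (waitpid(pid, &status, WNOHANG) == pid)   // reap this child specifically
            return true;
        usleep(100 * 1000);                          // poll every 100 ms
    }
    kill(pid, SIGKILL);                              // timed out: kill and reap
    waitpid(pid, &status, 0);
    return false;
}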
Avoid using pthread_cancel; it's probably not a good idea.