I am trying to parallelise a biological model in C++ with boost::mpi. It is my first attempt, and I am entirely new to the Boost library (I have started from the Boost C++ Libraries book by Schäling). The model consists of grid cells and cohorts of individuals living within each grid cell. The classes are nested, such that a vector of Cohort* belongs to each GridCell. The model runs for 1000 years, and at each time step there is dispersal, such that the cohorts of individuals move randomly between grid cells. I want to parallelise the content of the for loop, but not the loop itself, as each time step depends on the state of the previous one.
I use world.send() and world.recv() to send the necessary information from one rank to another. Because sometimes there is nothing to send between ranks, I use mpi::status and world.iprobe() to make sure the code does not hang waiting for a message that was never sent (I followed this tutorial).
The first part of my code seems to work fine, but I am having trouble making sure all the sent messages have been received before moving on to the next step in the for loop. In fact, I noticed that some ranks move on to the following time step before the other ranks have had time to send their messages (or at least that's what it looks like from the output).
I am not posting the code because it consists of several classes and it's quite long. If interested, the code is on GitHub. Here is rough pseudocode; I hope this will be enough to understand the problem.
int main()
{
    // initialise the GridCells and the Cohorts living in them;
    // depending on the number of cores requested, split the
    // grid cells that are processed by each core evenly, and
    // store the relevant grid cells in a vector of GridCell*

    // start to loop through each time step
    for (int k = 0; k < (burnIn + simTime); k++)
    {
        // calculate the survival and reproduction probabilities
        // for each Cohort and the dispersal probability;
        // the dispersing Cohorts are sorted based on the rank of
        // the destination and stored in multiple vector<Cohort*>

        // I send the vector<Cohort*> with
        world.send(...);

        // the receiving rank gets the vector of Cohorts with:
        mpi::status statuses[world.size()];
        for (int st = 0; st < world.size(); st++)
        {
            ....
            if (world.iprobe(st, tagrec))
                statuses[st] = world.recv(st, tagrec, toreceive[st]);
            // world.iprobe ensures that the code doesn't hang when there
            // are no dispersers
        }

        // do some extra calculations here

        // wait until all messages have been received, and then the time step ends.
        // This is the bit where I am stuck.
        // I've seen examples with wait_all for the non-blocking isend/irecv,
        // but I don't think it is applicable in my case.
        // The problem is that I noticed that some ranks proceed to the next
        // time step before all the other ranks have sent their messages.
    }
}
I compile with
mpic++ -I/$HOME/boost_1_61_0/boost/mpi -std=c++11 -Llibdir -lboost_mpi -lboost_serialization -lboost_locale -o out
and execute with mpirun -np 5 out, but I would like to be able to execute with a higher number of cores on an HPC cluster later on (the model will be run at the global scale, and the number of cells might depend on the grid cell size chosen by the user).
The installed toolchain is g++ 7.3.0 (Ubuntu 7.3.0-27ubuntu1~18.04) with Open MPI 2.1.1.
The fact that you have nothing to send is an important piece of information in your scenario. You cannot deduce that fact from the absence of a message alone. The absence of a message only means nothing has been sent yet.
Simply sending a zero-sized vector and skipping the probing is the easiest way out.
Otherwise you would probably have to change your approach radically or implement a very complex speculative execution / rollback mechanism.
Also note that the linked tutorial uses probe in a very different fashion.
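A minimal sketch of the zero-sized-vector approach (plain ints stand in for the serialized Cohorts, and tagrec is borrowed from the question): every rank posts one send to every peer each time step, so the receive loop doubles as the synchronization point that is currently missing.

#include <boost/mpi.hpp>
#include <boost/serialization/vector.hpp>
#include <vector>

namespace mpi = boost::mpi;

int main()
{
    mpi::environment env;
    mpi::communicator world;
    const int tagrec = 0;

    // One buffer per destination rank; in the real model these would
    // hold the dispersing Cohorts.
    std::vector<std::vector<int>> outgoing(world.size());
    std::vector<std::vector<int>> incoming(world.size());

    // Post a send to every other rank, even when there is nothing to
    // disperse: an empty vector is still a message.
    std::vector<mpi::request> reqs;
    for (int dst = 0; dst < world.size(); ++dst)
        if (dst != world.rank())
            reqs.push_back(world.isend(dst, tagrec, outgoing[dst]));

    // Receive exactly one message from every other rank. Since each
    // rank posts exactly one send per peer, recv() blocks until the
    // matching message arrives, so no rank can run ahead into the
    // next time step.
    for (int src = 0; src < world.size(); ++src)
        if (src != world.rank())
            world.recv(src, tagrec, incoming[src]);

    mpi::wait_all(reqs.begin(), reqs.end());
}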
I am trying to control a robot using a template-based controller class written in C++. Essentially, I have a UDP connection set up with the robot to receive its state and send new torque commands. I receive new observations at a higher frequency (say 2000Hz), while my controller takes about 1ms (1000Hz) to calculate new torque commands to send to the robot. The problem I am facing is that I don't want my main code to wait to send the old torque commands while my controller is still calculating new commands. From what I understand, I can use Ubuntu with an RT-Linux kernel, multi-thread the code so that my getTorques() method runs in a different thread, set priorities for the process, and use mutexes and locks to avoid data races between the two threads, but I was hoping to learn what the best strategies are for writing hard real-time code for such a problem.
// main.cpp
#include "CONTROLLER.h"
#include "llapi.h"

int main() {
    ...
    CONTROLLERclass obj;
    ...
    double new_observation;
    double u;
    ...
    while (communicating) {
        get_newObs(new_observation);        // Get new state of the robot (2000Hz)
        obj.getTorques(new_observation, u); // Takes about 1ms to calculate new torques
        send_newCommands(u);                // Send the new torque commands to the robot
    }
    ...
}
Thanks in advance!
Okay, so first of all, it sounds to me like you need to deal with the fact that you receive input at 2 kHz but can only compute results at about 1 kHz.
Based on that, you're apparently going to have to discard roughly half the inputs, or else somehow (in a way that makes sense for your application) quickly combine the inputs that have arrived since the last time you processed the inputs.
But as the code is structured right now, you're going to fetch and process older and older inputs, so even though you're producing outputs at ~1 KHz, those outputs are constantly being based on older and older data.
For the moment, let's assume you want to receive inputs as fast as you can, and when you're ready to do so, you process the most recent input you've received, produce an output based on that input, and repeat.
In that case, you'd probably end up with something on this general order (using C++ threads and atomics for the moment):
std::atomic<double> new_observation;

std::thread receiver([&] {
    while (communicating) {
        double d;
        get_newObs(d);       // blocks until the next sample arrives (~2000Hz)
        new_observation = d; // atomic store publishes the latest sample
    }
});

std::thread sender([&] {
    while (communicating) {
        double input = new_observation; // atomic load of the newest sample
        double u;
        obj.getTorques(input, u);       // ~1ms to compute the new torques
        send_newCommands(u);
    }
});
I've assumed that you'll always receive input faster than you can consume it, so the processing thread can always process whatever input is waiting, without receiving anything to indicate that the input has been updated since it was last processed. If that's wrong, things get a little more complex, but I'm not going to try to deal with that right now, since it sounds like it's unnecessary.
As far as the code itself goes, the only thing that may not be obvious is that instead of passing a reference to new_observation to either of the existing functions, I've read it into a variable local to the thread, then passed a reference to that.
I have a question regarding buffering between blocks in GNU Radio. I know that each block in GNU Radio (including custom blocks) has buffers to store the items that are going to be sent or received. In my project, there is a certain sequence I have to maintain to synchronize events between blocks. I am using GNU Radio on the Xilinx ZC706 FPGA platform with the FMCOMMS5.
In GNU Radio Companion I created a custom block that controls a GPIO output port on the board. In addition, I have an independent source block that is feeding information into the FMCOMMS block. The sequence I am trying to maintain is that, in GNU Radio, I first send data to the FMCOMMS block, second I want to make sure that the data got consumed by the FMCOMMS block (essentially by checking its buffer), and then finally I want to control the GPIO output.
From my observations, the source block buffer doesn’t seem to send the items until it’s full. This will cause a major issue in my project because this means that the GPIO data will be sent before or in parallel with sending the items to the other GNU blocks. That’s because I’m setting the GPIO value through direct access to its address in the ‘work’ function of my custom block.
I tried to use pc_output_buffers_full() in the ‘work’ function of my custom source in order to monitor the buffer, but I’m always getting 0.00. I’m not sure if it’s supposed to be used in custom blocks or if the ‘buffer’ in this case is something different from where the output items are stored. Here's a small code snippet which shows the problem:
char level_count = 0, level_val = 1;
vector<float> buff(1, 0.0000);
for (int i = 0; i < noutput_items; i++)
{
    if (level_count < 20 && i < noutput_items)
    {
        out[i] = gr_complex((float)level_val, 0);
        level_count++;
    }
    else if (i < noutput_items)
    {
        level_count = 0;
        level_val ^= 1;
        out[i] = gr_complex((float)level_val, 0);
    }

    buff = pc_output_buffers_full();
    for (int n = 0; n < buff.size(); n++)
        cout << fixed << setw(5) << setprecision(2) << setfill('0') << buff[n] << " ";
    cout << "\n";
}
Is there a way to monitor the buffer so that I can determine when my first part of data bits has been sent? Or is there a way to make sure that each single output item is sent like a continuous stream to the next block(s)?
GNU Radio Companion version: 3.7.8
OS: Linaro 14.04 image running on the FPGA
Or is there a way to make sure that each single output item is sent like a continuous stream to the next block(s)?
Nope, that's not how GNU Radio works (at all!):
A while back I wrote an article that explains how GNU Radio deals with buffers, and what these actually are. While the in-memory architecture of GNU Radio buffers might be of lesser interest to you, let me quickly summarize the dynamics of it:
The buffers that (general_)work functions are called with behave, for all practical purposes, like linearly addressable ring buffers. You get a random number of samples at once (restrictable to a minimum number, or to multiples of a number), and everything you don't consume will be handed to you again the next time work is called.
These buffers hence keep track of how much you've consumed, and thus, how much free space is in a buffer.
The input buffer a block sees is actually the output buffer of the "upstream" block in the flow graph.
GNU Radio's computation is backpressure-controlled: Any block's work method will immediately be called in an endless loop given that:
There's enough input for the block to do work,
There's enough output buffer space to write to.
Therefore, as soon as one block finishes its work call, the upstream block is informed that there's new free output space, thus typically leading to it running.
That leads to a high degree of parallelism, since even adjacent blocks can run simultaneously without conflicting.
This architecture favors large chunks of input items, especially for blocks that take a relatively long time to compute: while the block is still working, its input buffer is already being filled with chunks of samples; when it's finished, chances are it's immediately called again with all the available input buffer already filled with new samples.
This architecture is asynchronous: even if two blocks are "parallel" in your flow graph, there's no defined temporal relation between the numbers of items they produce.
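To make that consume/produce bookkeeping concrete, here is a minimal sketch of a pass-through general_work (hypothetical block name passthrough_impl, GNU Radio 3.7 API):

#include <algorithm> // std::min
#include <cstring>   // std::memcpy

int passthrough_impl::general_work(int noutput_items,
                                   gr_vector_int &ninput_items,
                                   gr_vector_const_void_star &input_items,
                                   gr_vector_void_star &output_items)
{
    const gr_complex *in = (const gr_complex *) input_items[0];
    gr_complex *out = (gr_complex *) output_items[0];

    // We may be offered more input than we can emit (or vice versa);
    // whatever we don't consume here is offered again on the next call.
    const int n = std::min(noutput_items, ninput_items[0]);
    std::memcpy(out, in, n * sizeof(gr_complex));

    consume_each(n); // tell the ring buffer how much input we used
    return n;        // and how much output we produced
}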
I'm not even convinced that switching GPIOs at times dictated by the speed of computation in this completely non-deterministic data flow model is a good idea to start with. Maybe you'd rather want to calculate "timestamps" at which GPIOs should be switched, and send (timestamp, GPIO state) command tuples to some entity in your FPGA that keeps absolute time? On the scale of radio propagation and high-rate signal processing, CPU timing is really inaccurate, and you should use the fact that you have an FPGA to actually implement deterministic timing, and use the software running on the CPU (i.e. GNU Radio) to determine when that should happen.
Is there a way to monitor the buffer so that I can determine when my first part of data bits has been sent?
Other than that, a method to asynchronously tell another block that, yes, you've processed N samples, would be either to have a single block that just observes the outputs of both blocks that you want to synchronize and consumes an identical number of samples from both inputs, or to implement something using message passing. Again, my suspicion is that this is not a solution to your actual problem.
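A minimal sketch of that observer idea (hypothetical block name sync_observer_impl): a sync block with two inputs and no outputs automatically consumes the same number of items from every input, so it only advances when both upstream blocks have produced the same amount of data.

int sync_observer_impl::work(int noutput_items,
                             gr_vector_const_void_star &input_items,
                             gr_vector_void_star &output_items)
{
    // Returning noutput_items consumes that many samples from BOTH
    // inputs; past this point you know both streams have progressed
    // equally far.
    return noutput_items;
}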
I am having trouble getting a custom block to operate at high frequency.
The block I would like to use is going to take in data from an external radio.
I am using an Ettus USRP block to stream data in from this radio, and I can display this on the QT Scope. I can set this block's sample rate to 15 MHz, and with the scope this seems to work ok.
Problem:
I have tried making a simple block with the gnuradio gr_modtool which takes in 2 floats as input and has 0 outputs. The block has private members "timer", a time_t, and "count", an int. In the "work" function, my code simply does this at the moment:
const float *in_i = (const float *) input_items[0];
const float *in_q = (const float *) input_items[1];

if (count == 0) {
    if (*in_i > 0.5) {
        timer = clock();
        count = 30000;
    }
} else {
    count--;
    if (count == 0) {
        timer = clock() - timer;
        printf("Count took %ld clicks, or %f seconds\n",
               (long)timer, (float)timer / CLOCKS_PER_SEC);
    }
}

// Tell runtime system how many output items we produced.
return 0;
However, when I run this code, it takes longer than the expected time.
For 30000 cycles, it takes 0.872970 seconds to complete, instead of the desired 0.002 seconds. Since the standard gnuradio block generated with gr_modtool is a sync block, and the input stream to the block is coming from the 15 MHz USRP, I would have expected this block to run at that same frequency. This is not currently the case.
Eventually my goal is to be able to store data streaming in over a period of time and write it to file with certain formatting. (A block already exists to do this, but there is some sort of bug that is preventing that block and the USRP block from working at the same time, so I am attempting to write my own.) However, unless I can keep up with the sample rate of 15 MHz, I will lose data. Since this block is fairly simple, I would have hoped it would run quickly enough to keep up. The input stream block is able to pull data from the radio and output at 15 MHz, so I know my computer is capable of it.
How can I make this custom block operate more quickly and keep up with the 15 MHz frequency? (Or: how can I make this sync block operate at the input stream frequency, since it currently does not?)
Your block is not consuming any samples. I presume you're writing a sync_block (work function, not general_work), so your number of produced items is identical to the number of consumed items. But as your source code says:
// Tell runtime system how many output items we produced.
return 0;
In other words, your block tells GNU Radio that it didn't use any of the input GNU Radio offered, and produced no output. That means GNU Radio cannot make any progress. You must return the number of items you've produced, and for sync blocks, that's the number of items you consumed – even if you're a sink, with zero output streams!
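A sketch of the fix, keeping the question's timing logic but returning the number of samples handled (hypothetical class name; for a sync block, consumed equals produced even with zero output streams):

int edge_timer_impl::work(int noutput_items,
                          gr_vector_const_void_star &input_items,
                          gr_vector_void_star &output_items)
{
    const float *in_i = (const float *) input_items[0];
    const float *in_q = (const float *) input_items[1];

    // ... the clock()-based counting logic from the question goes here,
    // iterating over all noutput_items samples of in_i ...

    // Tell the runtime we handled everything we were offered, so the
    // scheduler keeps the 15 MHz stream flowing instead of stalling.
    return noutput_items;
}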
I have a somewhat simple multithreaded application written using the C++ std::thread library for both Ubuntu 14.04 and Windows 8.1. The code is nearly identical except that I'm using the operating-system-specific headers windows.h and unistd.h for the Sleep/sleep calls that pause execution for a time. Both versions begin to run, but the Ubuntu version only keeps running for a short time before it hangs. I am using the proper arguments to the sleep/Sleep functions, since I know Windows Sleep takes milliseconds while Unix sleep takes seconds.
I've run the code multiple times, and on Ubuntu it never makes it past two minutes, whereas on Windows I've run it twice for 20 minutes and then multiple times for roughly five minutes each to see if I was just lucky. Is this just an incompatibility with the thread library, does sleep not do what I think it does, or is it something else? The infinite loops are there because this is a school project and is expected to run without deadlocks or crashing.
The gist is that this is a modified 4-way stop where cars that arrive first don't have to slow down and stop. We only had to let one car through the intersection at a time, which takes 3 seconds to cross, hence Sleep(3000), and we don't have to worry about turns. Three threads run the spawnCars function, and four other threads each monitor one of the four directions N, E, S, and W. I hope it's understandable why I can't post the entire code, on the chance some other student stumbles upon this. These two functions are the only places where the code differs, aside from the operating-system-dependent library inclusion at the top. Thanks.
edit: Since I've just gone and posted all the code for the project, if the problem does end up being a deadlock, may I request that you only say so, and not post an in depth solution? I'm new here so if that's against the spirit of SO then fire away and I'll try to figure it out without reading the details.
/* function clearIntersection
   Makes a car go through the intersection. The sleep comes before the removal from the queue
   because my understanding is that the wait condition simulates the go signal for drivers.
   It wouldn't make sense for the sensors to tell a car to go if the intersection isn't yet
   clear, even if the lock here would prevent that.
*/
void clearIntersection(int direction)
{
    lock->lock();
    Sleep(3000);
    dequeue(direction);
    lock->unlock();
}

/* function isAtFront(int direction)
   Checks whether the car waiting at the intersection from a particular direction
   has permission to pass, meaning it is at the front of the list of ALL waiting cars.
   This is the waiting condition.
*/
bool isAtFront(int direction)
{
    lock->lock();
    bool isAtFront = cardinalDirections[direction].front() == list->front();
    lock->unlock();
    return isAtFront;
}

void waitInLine()
{
    unique_lock<mutex> conditionLock(*lock);
    waitForTurn->wait(conditionLock);
    conditionLock.unlock();
}

// function broadcast(): Let all waiting threads know they can check whether or not their car can go.
void broadcast()
{
    waitForTurn->notify_all();
}
};
/* function monitorDirection(intersectionQueue, int, int)
   Threads will run this function. There are four threads that run this function
   in total, one for each of the cardinal directions. The threads check to see
   if the car at the front of the intersectionQueue, which contains the arrival order
   of cars regardless of direction, is the car at the front of the queue for the
   direction the thread is assigned to monitor. If not, it waits on a condition
   variable until it is the case. It then calls the function to clear the intersection.
   Broadcast is then used on the condition variable so all drivers will check if they
   are allowed to pass, which one will unless there are 0 waiting cars, waiting again if not the case.
*/
void monitorDirection(intersectionQueue *intersection, int direction, int id)
{
    while (true) // Do forever to see if crashes can occur.
    {
        // Do nothing if there are no cars coming from this direction.
        // Possibly add more condition_variables for each direction?
        if (!intersection->empty(direction))
        {
            while (!intersection->isAtFront(direction))
                intersection->waitInLine();

            intersection->clearIntersection(direction);
            cout << "A car has gone " << numberToDirection(direction) << endl;

            // All cars at the intersection will check the signal to see if it's time to go, so broadcast is used.
            intersection->broadcast();
        }
    }
}
Your culprit is likely your while (!isAtFront(...)) loop. If another thread gets scheduled between the check and the subsequent call to waitInLine(), the state of your queues could change, causing all of your consumer threads to end up waiting. At that point there's no thread to signal your condition_variable, so they will wait forever.
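One common fix, sketched against the question's own members (lock, waitForTurn, cardinalDirections, list): fold the check and the wait into a single predicate-based wait under one lock, so the queue state cannot change between the two.

void waitInLine(int direction)
{
    unique_lock<mutex> conditionLock(*lock);
    // The predicate is re-evaluated under the lock every time the thread
    // wakes, so a state change can never slip in between check and wait.
    waitForTurn->wait(conditionLock, [&] {
        return cardinalDirections[direction].front() == list->front();
    });
}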
I have a 'server' program that updates many linked lists in shared memory in response to external events. I want client programs to notice an update on any of the lists as quickly as possible (lowest latency). The server marks a linked list's node's state_ as FILLED once its data is filled in and its next pointer has been set to a valid location. Until then, its state_ is NOT_FILLED_YET. I am using memory barriers to make sure that clients don't see the state_ as FILLED before the data within is actually ready (and it seems to work, I never see corrupt data). Also, state_ is volatile to be sure the compiler doesn't lift the client's checking of it out of loops.
Keeping the server code exactly the same, I've come up with 3 different methods for the client to scan the linked lists for changes. The question is: Why is the 3rd method fastest?
Method 1: Round robin over all the linked lists (called 'channels') continuously, looking to see if any nodes have changed to 'FILLED':
void method_one()
{
    std::vector<Data*> channel_cursors;
    for (ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
    {
        Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
        channel_cursors.push_back(current_item);
    }

    while (true)
    {
        for (std::size_t i = 0; i < channel_list.size(); ++i)
        {
            Data* current_item = channel_cursors[i];

            ACQUIRE_MEMORY_BARRIER;
            if (current_item->state_ == NOT_FILLED_YET) {
                continue;
            }

            log_latency(current_item->tv_sec_, current_item->tv_usec_);
            channel_cursors[i] = static_cast<Data*>(current_item->next_.get(segment));
        }
    }
}
Method 1 gave very low latency when the number of channels was small, but when the number of channels grew (250K+) it became very slow because of looping over all the channels. So I tried...
Method 2: Give each linked list an ID. Keep a separate 'update list' to the side. Every time one of the linked lists is updated, push its ID on to the update list. Now we just need to monitor the single update list, and check the IDs we get from it.
void method_two()
{
    std::vector<Data*> channel_cursors;
    for (ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
    {
        Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
        channel_cursors.push_back(current_item);
    }

    UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment));

    while (true)
    {
        ACQUIRE_MEMORY_BARRIER;
        if (update_cursor->state_ == NOT_FILLED_YET) {
            continue;
        }

        ::uint32_t update_id = update_cursor->list_id_;
        Data* current_item = channel_cursors[update_id];

        if (current_item->state_ == NOT_FILLED_YET) {
            std::cerr << "This should never print." << std::endl; // it doesn't
            continue;
        }

        log_latency(current_item->tv_sec_, current_item->tv_usec_);

        channel_cursors[update_id] = static_cast<Data*>(current_item->next_.get(segment));
        update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment));
    }
}
Method 2 gave TERRIBLE latency. Whereas Method 1 might give under 10us latency, Method 2 would inexplicably often give 8ms latency! Using gettimeofday, it appears that the change in update_cursor->state_ was very slow to propagate from the server's view to the client's (I'm on a multicore box, so I assume the delay is due to cache). So I tried a hybrid approach...
Method 3: Keep the update list. But loop over all the channels continuously, and within each iteration check if the update list has updated. If it has, go with the number pushed onto it. If it hasn't, check the channel we've currently iterated to.
void method_three()
{
    std::vector<Data*> channel_cursors;
    for (ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
    {
        Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
        channel_cursors.push_back(current_item);
    }

    UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment));

    while (true)
    {
        for (std::size_t i = 0; i < channel_list.size(); ++i)
        {
            std::size_t idx = i;

            ACQUIRE_MEMORY_BARRIER;
            if (update_cursor->state_ != NOT_FILLED_YET) {
                //std::cerr << "Found via update" << std::endl;
                i--;
                idx = update_cursor->list_id_;
                update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment));
            }

            Data* current_item = channel_cursors[idx];

            ACQUIRE_MEMORY_BARRIER;
            if (current_item->state_ == NOT_FILLED_YET) {
                continue;
            }

            found_an_update = true;

            log_latency(current_item->tv_sec_, current_item->tv_usec_);
            channel_cursors[idx] = static_cast<Data*>(current_item->next_.get(segment));
        }
    }
}
The latency of this method was as good as Method 1, but scaled to large numbers of channels. The problem is, I have no clue why. Just to throw a wrench in things: if I uncomment the 'found via update' part, it prints between EVERY LATENCY LOG MESSAGE. Which means things are only ever found on the update list! So I don't understand how this method can be faster than method 2.
The full, compilable code (requires GCC and boost-1.41) that generates random strings as test data is at: http://pastebin.com/0kuzm3Uf
Update: All 3 methods are effectively spinlocking until an update occurs. The difference is in how long it takes them to notice the update has occurred. They all continuously tax the processor, so that doesn't explain the speed difference. I'm testing on a 4-core machine with nothing else running, so the server and the client have nothing to compete with. I've even made a version of the code where updates signal a condition and have clients wait on the condition -- it didn't help the latency of any of the methods.
Update2: Despite there being 3 methods, I've only tried 1 at a time, so only 1 server and 1 client are competing for the state_ member.
Hypothesis: Method 2 is somehow blocking the update from getting written by the server.
One of the things you can hammer, besides the processor cores themselves, is your coherent cache. When you read a value on a given core, the L1 cache on that core has to acquire read access to that cache line, which means it needs to invalidate the write access to that line that any other cache has. And vice versa to write a value. So this means that you're continually ping-ponging the cache line back and forth between a "write" state (on the server-core's cache) and a "read" state (in the caches of all the client cores).
The intricacies of x86 cache performance are not something I am entirely familiar with, but it seems entirely plausible (at least in theory) that what you're doing by having three different threads hammering this one memory location as hard as they can with read-access requests is approximately creating a denial-of-service attack on the server preventing it from writing to that cache line for a few milliseconds on occasion.
You may be able to do an experiment to detect this by looking at how long it takes for the server to actually write the value into the update list, and see if there's a delay there corresponding to the latency.
You might also be able to try an experiment of removing cache from the equation, by running everything on a single core so the client and server threads are pulling things out of the same L1 cache.
I don't know if you have ever read the Concurrency columns from Herb Sutter. They are quite interesting, especially when you get into the cache issues.
Indeed, Method 2 seems better here because the ID, being smaller than the data in general, means that you don't have to do round trips to main memory too often (which is taxing).
However, what can actually happen is that you have such a line of cache:

Line of cache = [ID1, ID2, ID3, ID4, ...]
                  ^                    ^
                client               server

Which then creates contention.
Here is Herb Sutter's article: Eliminate False Sharing. The basic idea is simply to artificially inflate your ID in the list so that it occupies one line of cache entirely.
Check out the other articles in the series while you're at it. Perhaps you'll get some ideas. There's a nice lock-free circular buffer, I think, that could help with your update list :)
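A sketch of that inflation idea using C++11's alignas (the question's actual UpdateID layout isn't shown, so the fields here are illustrative):

#include <cstdint>

struct alignas(64) UpdateID // 64 bytes: a typical x86 cache-line size
{
    volatile int  state_;   // NOT_FILLED_YET / FILLED
    std::uint32_t list_id_;
    // The next_ offset pointer etc. would follow; alignas pads the
    // struct so two consecutive UpdateIDs never share a cache line,
    // which eliminates false sharing between adjacent entries.
};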
I've noticed in both method 1 and method 3 you have a line, ACQUIRE_MEMORY_BARRIER, which I assume has something to do with multi-threading/race conditions?
Either way, method 2 doesn't have any sleeps, which means the following code...
while (true)
{
    if (update_cursor->state_ == NOT_FILLED_YET) {
        continue;
    }
is going to hammer the processor. The typical way to do this kind of producer/consumer task is to use some kind of semaphore to signal to the reader that the update list has changed. A search for producer/consumer multithreading should give you a large number of examples. The main idea here is that this allows the thread to go to sleep while it waits for update_cursor->state to change. This prevents the thread from stealing all the CPU cycles.
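A minimal sketch of that producer/consumer signaling with a standard condition variable (not tied to the question's shared-memory setup, where a process-shared primitive would be needed instead):

#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool updated = false;

// Producer (server) side: after filling in a node, signal the readers.
void publish()
{
    {
        std::lock_guard<std::mutex> g(m);
        updated = true;
    }
    cv.notify_all();
}

// Consumer (client) side: sleep until signaled instead of spinning.
void wait_for_update()
{
    std::unique_lock<std::mutex> g(m);
    cv.wait(g, [] { return updated; });
    updated = false;
}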
The answer was tricky to figure out, and to be fair it would have been hard to find with the information I presented, though anyone who actually compiled the source code I provided would have had a fighting chance ;) I said that "found via update list" was printed after every latency log message, but this wasn't actually true -- it was only true as far as I could scroll back in my terminal. At the very beginning there were a slew of updates found without using the update list.
The issue is that between the time when I set my starting point in the update list and my starting point in each of the data lists, there is going to be some lag because these operations take time. Remember, the lists are growing the whole time this is going on. Consider the simplest case where I have 2 data lists, A and B. When I set my starting point in the update list there happen to be 60 elements in it, due to 30 updates on list A and 30 updates on list B. Say they've alternated:
A
B
A
B
A // and I start looking at the list here
B
But then after I set the update list to there, there are a slew of updates to B and no updates to A. Then I set my starting places in each of the data lists. My starting points for the data lists are going to be after that surge of updates, but my starting point in the update list is before that surge, so now I'm going to check for a bunch of updates without finding them. The mixed approach above works best because by iterating over all the elements when it can't find an update, it quickly closes the temporal gap between where the update list is and where the data lists are.