ROS Approximate Time sync not entering callback - C++

I am trying to subscribe to a point cloud and its normals from two different subscribers and pass them on to a common function via approximate time sync. Please find minimal code below:
void OrganizedMultiPlaneSegmentation::onInit() {
  sub = private_nh.subscribe("/cloud/cloud_out_merged", 1, &OrganizedMultiPlaneSegmentation::segment, this, ros::TransportHints().tcpNoDelay());
}

void OrganizedMultiPlaneSegmentation::segment(const sensor_msgs::PointCloud2& msg) {
  ros::NodeHandle& nh_ = getNodeHandle();
  sub1_.subscribe(nh_, "/cloud/cloud_out_merged", 5);
  if (sub1_.getSubscriber().getNumPublishers() == 1)
  {
    // NODELET_INFO("Got a subscriber to scan, starting subscriber to merged pointcloud");
  }
  sub2_.subscribe(nh_, "/cloud_out_normal", 5);
  if (sub2_.getSubscriber().getNumPublishers() == 1)
  {
    // NODELET_INFO("Got a subscriber to scan, starting subscriber to normal pointcloud");
  }
  typedef sync_policies::ApproximateTime<sensor_msgs::PointCloud2, sensor_msgs::PointCloud2> MySyncPolicy;
  // Note: 'sync' is a local object here; it is destroyed as soon as segment() returns.
  Synchronizer<MySyncPolicy> sync(MySyncPolicy(40), sub1_, sub2_);
  // sync.setAgePenalty(1.0);
  sync.registerCallback(boost::bind(&OrganizedMultiPlaneSegmentation::segmentPlanes, this, _1, _2));
}
// Callback function to be called
void OrganizedMultiPlaneSegmentation::segmentPlanes(const PointCloud2ConstPtr& cloud_msg1, const PointCloud2ConstPtr& cloud_msg2)
{
  pcl::PointCloud<pointT>::Ptr cloud(new pcl::PointCloud<pointT>());
  pcl::fromROSMsg(*cloud_msg1, *cloud);
  pcl::PointCloud<pcl::Normal>::Ptr normalCloud(new pcl::PointCloud<pcl::Normal>());
I have tried changing the queue size of the subscribers and the buffer size of the policy, and neither seems to work. Please let me know if there is something obvious that I am doing wrong, or what the possible causes and remedies might be. Happy to provide more information, and thank you!

I would try playing with the slop parameter of ApproximateTimeSynchronizer. It controls how large the difference between message timestamps may be for the messages to still be considered synchronized.
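The slop parameter is the name in the Python API; in roscpp the corresponding knob on the ApproximateTime policy is setMaxIntervalDuration. A minimal sketch against the question's types (the 100 ms value is only an illustrative assumption):
MySyncPolicy policy(40);
policy.setMaxIntervalDuration(ros::Duration(0.1)); // e.g. allow up to 100 ms between paired stamps
Synchronizer<MySyncPolicy> sync(policy, sub1_, sub2_);
// register the callback on 'sync' as before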
Also, one of the basic checks is to verify that messages are actually being published on the topics /cloud/cloud_out_merged and /cloud_out_normal :)

Related

Multi-threading in a sender-receiver model in C++

I am trying to build a sender-receiver model in C++. I want to send data from sender to receiver in a feed-forward way. In short, the desired architecture can be expressed like so (the elementary units are called nodes):
Each node can receive (send) data from (to) other nodes. Each node can have arbitrarily many senders (receivers). Each node has a number called the impulse, which is the data to be sent. Let's call one propagation of data through the whole network a cycle. Then in one cycle, each node that belongs to the network must send its impulse once and only once to all its receivers.
A sketch of an implementation of the above idea is the following.
#include <vector>

class Node
{
private:
    double in_signal;   // received data
    double out_signal;  // data to send
    bool is_opened;     // status of the channel
    void update_in( double package ); // receive a new package
protected:
    std::vector<Node*> receivers;
public:
    Node( double out_signal )
        : in_signal(0.),
          out_signal(out_signal),
          is_opened(false)
    {}
    ~Node() {}
    void add_receiver( Node* receiver );
    void emit(); // send out_signal, i.e. the impulse
};

void Node::update_in( double package )
{
    // the problem is how to control the status of the channel:
    // it must be closed after getting all data
    if ( not is_opened )
    {
        in_signal = 0.; // reset
        is_opened = true;
    }
    in_signal += package;
}

void Node::add_receiver( Node* receiver )
{
    receivers.push_back( receiver );
}

void Node::emit()
{
    for ( auto& receiver : receivers )
    {
        receiver->update_in( out_signal );
    }
}
The problem I cannot solve is the multi-threading behind this architecture:
A node's in_signal can be updated by several senders, but the senders work independently (this is required), so I am worried about data races.
So I am asking:
How do I solve this multi-threading problem?
How do I decide whether all data has been received?
I will be thankful for ideas, patterns, concepts, etc.
First of all, IMHO, this is an oversimplification of your problem. I suspect, for example, that your data can't really be just a double.
Anyway:
If your architecture has a race condition (and if each node has its own thread, it clearly does), I don't see any solution but a mutex.
It depends on the size and complexity of the real task, but you could apply some rules that let nodes track the lifetime of packets (some ideas: sequence numbers for packets, a list of neighbouring nodes, etc.).
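As a minimal sketch of the mutex idea against the question's Node (begin_cycle and the pending counter are my own additions, one way to answer the "has all data arrived?" question):
#include <cstddef>
#include <mutex>

class Node
{
    std::mutex m;            // guards in_signal and pending
    double in_signal = 0.;
    std::size_t pending = 0; // senders still expected in this cycle
public:
    void begin_cycle( std::size_t n_senders )
    {
        std::lock_guard<std::mutex> lock(m);
        in_signal = 0.;
        pending = n_senders;
    }
    void update_in( double package )
    {
        std::lock_guard<std::mutex> lock(m); // serializes concurrent senders
        in_signal += package;
        if ( --pending == 0 )
        {
            // all senders have delivered: safe to "close the channel"
            // and forward in_signal onwards
        }
    }
};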
To protect a variable, I recommend a test-and-set method, which is a simple mechanism for synchronizing access to a variable shared by multiple threads.
On Windows the most common way to do that is the InterlockedExchange function.
For Linux you can use this wrapper:
template<typename T> T InterlockedExchange(T& data, T& new_val)
{
    return __sync_lock_test_and_set(&data, new_val);
}
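Alternatively, since C++11 you can get the same exchange semantics portably with std::atomic instead of compiler intrinsics (a sketch, not part of the original suggestion):
#include <atomic>

std::atomic<double> shared_value{0.0};
double previous = shared_value.exchange(1.5); // atomically swap in a new value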

What's a proper way to use set_alert_notify to wake up main thread?

I'm trying to write my own torrent program based on libtorrent-rasterbar and I'm having problems getting the alert mechanism working correctly. Libtorrent offers the function
void set_alert_notify (boost::function<void()> const& fun);
which is supposed to
The intention of the function is that the client wakes up its main thread, to poll for more alerts using pop_alerts(). If the notify function fails to do so, it won't be called again, until pop_alerts is called for some other reason.
So far so good; I think I understand the intention behind this function. However, my actual implementation doesn't work so well. My code so far is like this:
std::unique_lock<std::mutex> ul(_alert_m);
session.set_alert_notify([&]() { _alert_cv.notify_one(); });
while (!_alert_loop_should_stop) {
    if (!session.wait_for_alert(std::chrono::seconds(0))) {
        _alert_cv.wait(ul);
    }
    std::vector<libtorrent::alert*> alerts;
    session.pop_alerts(&alerts);
    for (auto alert : alerts) {
        LTi_ << alert->message();
    }
}
However, there is a race condition: if wait_for_alert returns NULL (no alerts yet) but the function passed to set_alert_notify is called before _alert_cv.wait(ul);, the whole loop waits forever (because of the second sentence of the quote).
For the moment my solution is just changing _alert_cv.wait(ul); to _alert_cv.wait_for(ul, std::chrono::milliseconds(250));, which keeps the number of wakeups per second low enough while keeping latency acceptable.
But it's really more a workaround than a solution, and I keep thinking there must be a proper way to handle this.
You need a variable to record the notification, protected by the same mutex that the condition variable waits on. That way a notification can never slip in unseen between the wait_for_alert check and the wait: either it fires before the consumer locks the mutex (and the flag is already set when the consumer checks it), or it fires after the consumer has started waiting (and the notify wakes it).
bool _alert_pending = false;

session.set_alert_notify([&]() {
    std::lock_guard<std::mutex> lg(_alert_m);
    _alert_pending = true;
    _alert_cv.notify_one();
});

std::unique_lock<std::mutex> ul(_alert_m);
while (!_alert_loop_should_stop) {
    _alert_cv.wait(ul, [&]() {
        return _alert_pending || _alert_loop_should_stop;
    });
    if (_alert_pending) {
        _alert_pending = false;
        ul.unlock();
        session.pop_alerts(...);
        ...
        ul.lock();
    }
}

How to use C++11 <thread> designing a system which pulls data from sources

This question comes from:
C++11 thread doesn't work with virtual member function
As suggested in a comment, my question in the previous post may not be the right one to ask, so here is the original question:
I want to make a capturing system which will query a few sources at a constant or dynamic frequency (it varies by source; say, 10 times per second) and pull data into a queue per source. The sources are not fixed; they may be added or removed at run time.
And there is a monitor which pulls from the queues at a constant frequency and displays the data.
So what is the best design pattern or structure for this problem?
I'm trying to keep a list of pullers for all the sources; each puller holds a thread and a specified pulling function (the pulling function may interact with the puller: say, if the source is drained, it will ask to stop the pulling process on that thread).
Unless the operation where you query a source is blocking (or you have lots of sources), you don't need to use threads for this. We can start with a Producer which will work with either synchronous or asynchronous (threaded) dispatch:
#include <cstddef>
#include <list>

template <typename OutputType>
class Producer
{
    std::list<OutputType> output;
protected:
    int poll_interval; // seconds? milliseconds?
    virtual OutputType query() = 0;
public:
    virtual ~Producer() {}
    int next_poll_interval() const { return poll_interval; }
    void poll() { output.push_back(this->query()); }
    std::size_t size() { return output.size(); }
    // whatever accessors you need for the queue here:
    // pop_front, swap entire list, etc.
};
Now we can derive from this Producer and just implement the query method in each subtype. You can set poll_interval in the constructor and leave it alone, or change it on every call to query. That's your general producer component, with no dependency on the dispatch mechanism.
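For illustration, a hypothetical subtype might look like this (CounterProducer and its counter are stand-ins of my own, not from the question):
class CounterProducer : public Producer<int>
{
    int n = 0; // fake source state, in place of a real device read
protected:
    int query() override { return n++; }
public:
    CounterProducer() { poll_interval = 100; } // poll every 100 ms
};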
#include <chrono>
#include <thread>

template <typename OutputType>
class ThreadDispatcher
{
    Producer<OutputType> *producer;
    bool shutdown;
    std::thread thread;

    static void loop(ThreadDispatcher *self)
    {
        Producer<OutputType> *producer = self->producer;
        while (!self->shutdown)
        {
            producer->poll();
            // some mechanism to pass the produced values back to the owner
            auto delay = // assume millis for sake of argument
                std::chrono::milliseconds(producer->next_poll_interval());
            std::this_thread::sleep_for(delay);
        }
    }
public:
    explicit ThreadDispatcher(Producer<OutputType> *p)
        : producer(p), shutdown(false), thread(loop, this)
    {
    }

    ~ThreadDispatcher()
    {
        shutdown = true;
        thread.join();
    }

    // again, the accessors you need for reading produced values go here
    // Producer::output isn't synchronised, so you can't expose it directly
    // to the calling thread
};
This is a quick sketch of a simple dispatcher that runs your producer in a thread, polling it however often you ask it to. Note that passing produced values back to the owner isn't shown, because I don't know how you want to access them.
Also note that I haven't synchronized access to the shutdown flag; it should probably be atomic, but it might be implicitly synchronized by whatever you choose to do with the produced values.
With this organization, it'd also be easy to write a synchronous dispatcher to query multiple producers in a single thread, for example from a select/poll loop, or using something like Boost.Asio and a deadline timer per producer.
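As a rough sketch of that synchronous variant, assuming the Producer interface above (a real version would track a deadline per producer rather than polling everything on one fixed tick):
#include <vector>

template <typename OutputType>
class SyncDispatcher
{
    std::vector<Producer<OutputType>*> producers;
public:
    void add(Producer<OutputType>* p) { producers.push_back(p); }

    // Poll every producer once on the calling thread; drive this from a
    // timer loop, a select/poll loop, or an Asio deadline timer.
    void poll_all()
    {
        for (auto* p : producers)
            p->poll();
    }
};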

winsock, message oriented networking, and type-casting the buffer from recv

Okay, I don't actually have code yet because I'm just picking out a framework for the time being, but I'm still a little baffled about how I want to go about this:
Server side, I want to have a class where each instance has a socket and various information identifying each connection. Each object will have its own thread for receiving data. I understand how I'll implement most of that, but my confusion starts at the actual transfer of data between server and client. I'll want a bunch of different message structs for specific cases (for example CONNECT_MSG, DISCONNECT_MSG, POSTTEXT_MSG, etc.), and then all I have to do is have a char* point at that struct and pass it via the send() function.
But as I think on it, it gets complicated at that point. Any of those message types could be sent, and on the receiving end you have no idea what to cast the incoming buffer to. What I was hoping to do is, in the thread of each connection object, block until a packet with a message is received, then dump it into a single queue object managed by the server (mutexes will prevent greediness), and then have the server process each message in FIFO order, independent of the connection objects.
I haven't written anything yet, but let me write a little something to illustrate my setup.
#define CONNECT 1000

struct GENERIC_MESSAGE
{
    int id;
};

struct CONNECT_MESSAGE : public GENERIC_MESSAGE
{
    char m_username[32]; // the question leaves the type unspecified; a fixed-size buffer is assumed here
};

void Connection::Thread()
{
    while(1)
    {
        char buffer[MAX_BUFFER_SIZE]; // some constant (probably 2048)
        recv(m_socket, buffer, MAX_BUFFER_SIZE, 0);
        GENERIC_MESSAGE * msg = reinterpret_cast<GENERIC_MESSAGE *>(buffer);
        server->QueueMessage(msg);
    }
}

void Server::QueueMessage(GENERIC_MESSAGE * msg)
{
    messageQueue.push(msg);
}

void Server::Thread()
{
    while(1)
    {
        if(!messageQueue.empty())
            ProcessMessages();
        else
            Sleep(1);
    }
}

void Server::ProcessMessages()
{
    while(!messageQueue.empty())
    {
        switch(messageQueue.front()->id)
        {
            case CONNECT:
            {
                // the part I REALLY don't like
                CONNECT_MESSAGE * msg = static_cast<CONNECT_MESSAGE *>(messageQueue.front());
                // do the rest of the processing on connect
                break;
            }
            // other cases for the other message types
        }
        messageQueue.pop();
    }
}
Now if you've been following up to this point, you realize just how STUPID and fragile this is. It casts to the base class, passes that pointer to a queue, and then just assumes the pointer is still valid from the other thread; and even then, there is no guarantee that the rest of the buffer behind the pointer remains valid for the derived-class cast. I have yet to find a correct way of doing this. I am wide open for ANY suggestions, either for making this work or for an entirely different messaging design.
Before you write even a line of code, design the protocol that will be used on the wire. Decide what a message consists of at the byte level. Decide who sends first, whether messages are acknowledged, how receivers identify message boundaries, and so on. Decide how the connection will be kept alive (if it will be), which side closes first, and so on. Then write the code around that specification.
Do not tightly associate how you store things in memory with how you send things on the wire. These are two very different things with two very different sets of requirements.
Of course, feel free to adjust the protocol specification as you write the code.
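To make that concrete, here is a minimal framing sketch of my own (the layout is an illustrative assumption, not a fixed spec): each frame carries a length prefix and a type id, and the receiver parses the payload into a fresh object instead of casting the recv buffer.
#include <cstdint>
#include <string>
#include <vector>

enum MsgType : uint16_t { MSG_CONNECT = 1000, MSG_DISCONNECT = 1001 };

// Wire layout (big-endian): [4-byte payload length][2-byte type][payload]
std::vector<uint8_t> encodeConnect(const std::string& username)
{
    std::vector<uint8_t> frame;
    uint32_t len = static_cast<uint32_t>(username.size());
    frame.push_back(uint8_t(len >> 24));
    frame.push_back(uint8_t(len >> 16));
    frame.push_back(uint8_t(len >> 8));
    frame.push_back(uint8_t(len));
    frame.push_back(uint8_t(MSG_CONNECT >> 8));
    frame.push_back(uint8_t(MSG_CONNECT & 0xFF));
    frame.insert(frame.end(), username.begin(), username.end());
    return frame;
}
// The receiver reads the 4-byte length, then the 2-byte type and
// 'length' payload bytes, switches on the type id, and builds a fresh
// heap-allocated message object from the payload -- no reinterpret_cast
// of the raw buffer, and no assumptions about struct padding.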

Thread implemented as a Singleton

I have a commercial application written in C and C++/Qt on the Linux platform. The app collects data from different sensors and displays it on a GUI. Each protocol for interfacing with the sensors is implemented as a singleton with its own thread (Qt's QThread class). All the protocols except one work fine. Each protocol's thread run function has the following structure:
void <ProtocolClassName>::run()
{
    while(!mStop) // check whether the screen is closed or not
    {
        mutex.lock();
        while(!waitcondition.wait(&mutex, 5))
        {
            if(mStop)
                return;
        }
        // Code for receiving and processing incoming data
        mutex.unlock();
    } // end while
}
Hierarchy of the GUI:
1. Login screen.
2. Action screen.
When a user logs in from the login screen, we enter the action screen, where all data is displayed and the threads for the different sensors start. When idle they wait on the mStop variable, and when data arrives they jump to receiving and processing it. Incoming data for the problem protocol is 117 bytes. In the main GUI thread there are timers which, on timeout, grab the running instance of a protocol using the
<ProtocolName>::instance() function,
check the update variable of the singleton class and, if it is true, display the data. When the data display is done, they reset the update variable in the singleton class to false. The problematic protocol has an update time of 1 second, which is also the protocol's frame rate. When I comment out the display function it runs fine, but when the display is active the application consistently hangs after 6-7 hours. I have asked this question on many forums but haven't received any worthwhile suggestions; I hope that here I will get some help. Also, I have read a lot of the literature on singletons and multithreading and found that people always discourage the use of singletons, especially in C++, but I can think of no other design for my application.
Thanks in advance,
A hapless programmer
I think a singleton is not really what you are looking for. Consider this:
You have (let's say) two sensors, each with its own protocol (frame rate, for our purposes).
Now create a "server" class for each sensor instead of an explicit singleton. This way you can hide the details of how your sensors work:
class SensorServer {
protected:
    int lastValueSensed;
    QThread* sensorProtocolThread;
public:
    int getSensedValue() { return lastValueSensed; }
};

class Sensor1ProtocolThread : public QThread {
protected:
    int* valueToUpdate;
    static const int TIMEOUT = 1000; // "frame rate" of our sensor1, in ms
public:
    Sensor1ProtocolThread( int* vtu ) {
        this->valueToUpdate = vtu;
    }
    void run() {
        while(true) {
            int valueFromSensor = 0;
            // get the value from the sensor into 'valueFromSensor'
            *valueToUpdate = valueFromSensor;
            msleep(TIMEOUT);
        }
    }
};

class Sensor1Server : public SensorServer {
public:
    Sensor1Server() {
        sensorProtocolThread = new Sensor1ProtocolThread(&lastValueSensed);
        sensorProtocolThread->start();
    }
};
This way you can do away with having to implement a singleton.
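Usage from the GUI side then becomes explicit ownership instead of ::instance() lookups; a hypothetical snippet (the names are mine):
// Created when the action screen opens; no global instance involved.
Sensor1Server sensor1Server;

// In a QTimer slot, at the GUI's own refresh rate:
int v = sensor1Server.getSensedValue(); // display v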
Cheers,
jrh.
Just a drive-by analysis but this doesn't smell right.
If the application is "consistently" hanging after 6-7 hours, are you sure it isn't a resource (e.g. memory) leak? Is there anything different about the implementation of the problematic protocol compared to the rest of them? Have you run the app through a memory checker, etc.?
Not sure it's the cause of what you're seeing, but you have a big fat synchronization bug in your code:
void <ProtocolClassName>::run()
{
    while(!mStop) // check whether the screen is closed or not
    {
        mutex.lock();
        while(!waitcondition.wait(&mutex, 5))
        {
            if(mStop)
                return; // BUG: missing mutex.unlock()
        }
        // Code for receiving and processing incoming data
        mutex.unlock();
    } // end while
}
better:
void <ProtocolClassName>::run()
{
    while(!mStop) // check whether the screen is closed or not
    {
        const QMutexLocker locker( &mutex );
        while(!waitcondition.wait(&mutex, 5))
        {
            if(mStop)
                return; // OK now
        }
        // Code for receiving and processing incoming data
    } // end while
}