Why doesn't the value of an instance variable persist? - C++

I have created a queue in the MP1Node class and add to it from the recvCallBack method. My goal was to use this queue to send the messages after having figured out the member_list from nodeLoopOps. However, the elements in this queue, let's call it msgQ, are getting lost as soon as checkMessages returns. I don't understand why this is happening. Is checkMessages() being executed on a new instance of the class? Why wouldn't msgQ persist, and how can I make it persist?
void MP1Node::nodeLoop() {
    // Check my messages
    checkMessages();
    // NOTE: msgQ size == 0 here
    return;
}

void MP1Node::checkMessages() {
    void *ptr;
    int size;
    ...
    recvCallBack((char *)ptr, size);
    ...
    // NOTE: msgQ size == 1 here
    return;
}

bool MP1Node::recvCallBack(char *data, int size) {
    ...
    scheduleMessage(newMsg);
    ...
}

void MP1Node::scheduleMessage(Message m) {
    msgQ.emplace(m);
}

class MP1Node {
private:
    queue<Message> msgQ;
};

It's difficult to tell for sure from the skeleton code provided.
But this part is a bit suspicious:
The queue is defined to hold objects of type Message. newMsg appears to be a local variable created in the method recvCallBack(). scheduleMessage() is called with that Message instance, at which point the message object is enqueued. However, because the Message instance newMsg has local scope, it goes out of scope when recvCallBack() returns.
At this point I might expect the queue to contain garbage, but perhaps instead it's showing up as empty.
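To make the suspected failure mode concrete: if the queue held pointers to that local instead of copies, the dangling would look like this; a minimal standalone sketch (hypothetical names, not the course skeleton):

#include <queue>
#include <string>

struct Message { std::string payload; };

std::queue<const Message*> msgQ;    // a queue of pointers, not values

void recvCallBack() {
    Message newMsg{"hello"};
    msgQ.push(&newMsg);             // stores the address of a local
}                                   // newMsg is destroyed here

int main() {
    recvCallBack();
    const Message* m = msgQ.front();
    // *m now refers to a dead object; reading it is undefined behavior
    (void)m;
}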

Related

Using member shared_ptr from a member callback function running in different thread (ROS topic subscription)

I am not completely sure how best to title this question, since I am not completely sure what the nature of the problem actually is (I guess "how fix segfault" is not a good title).
The situation is, I have written this code:
template <typename T>
class LatchedSubscriber {
private:
    ros::Subscriber sub;
    std::shared_ptr<T> last_received_msg;
    std::shared_ptr<std::mutex> mutex;
    int test;

    void callback(T msg) {
        std::shared_ptr<std::mutex> thread_local_mutex = mutex;
        std::shared_ptr<T> thread_local_msg = last_received_msg;
        if (!thread_local_mutex) {
            ROS_INFO("Mutex pointer is null in callback");
        }
        if (!thread_local_msg) {
            ROS_INFO("lrm: pointer is null in callback");
        }
        ROS_INFO("Test is %d", test);
        std::lock_guard<std::mutex> guard(*thread_local_mutex);
        *thread_local_msg = msg;
    }

public:
    LatchedSubscriber() {
        last_received_msg = std::make_shared<T>();
        mutex = std::make_shared<std::mutex>();
        test = 42;
        if (!mutex) {
            ROS_INFO("Mutex pointer is null in constructor");
        } else {
            ROS_INFO("Mutex pointer is not null in constructor");
        }
    }

    void start(ros::NodeHandle &nh, const std::string &topic) {
        sub = nh.subscribe(topic, 1000, &LatchedSubscriber<T>::callback, this);
    }

    T get_last_msg() {
        std::lock_guard<std::mutex> guard(*mutex);
        return *last_received_msg;
    }
};
Essentially what it is doing is subscribing to a topic (channel), meaning that a callback function is called each time a message arrives. The job of this class is to store the last received message so the user of the class can always access it.
In the constructor I allocate a shared_ptr to the message and one for a mutex to synchronize access to that message. The reason for using heap memory here is so the LatchedSubscriber can be copied and the same latched message can still be read. (The Subscriber already implements this kind of behavior, where copying it doesn't do anything except that the callback stops being called once the last instance goes out of scope.)
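Because both members are shared_ptrs, a copy of a LatchedSubscriber is meant to share the same heap state; a short sketch of the intended copy semantics (the std_msgs::String instantiation and topic name are assumptions, not from the original post):

#include <ros/ros.h>
#include <std_msgs/String.h>

// assumes the LatchedSubscriber template defined above
void copies_share_state(ros::NodeHandle& nh) {
    LatchedSubscriber<std_msgs::String> a;
    a.start(nh, "/chatter");                     // hypothetical topic
    LatchedSubscriber<std_msgs::String> b = a;   // copies the two shared_ptrs
    // a and b now share one message object and one mutex, so
    // b.get_last_msg() returns whatever a's callback last stored
}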
The problem is basically that the code segfaults. I am pretty sure the reason for this is that my shared pointers become null in the callback function, despite not being null in the constructor.
The ROS_INFO calls print:
Mutex pointer is not null in constructor
Mutex pointer is null in callback
lrm: pointer is null in callback
Test is 42
I don't understand how this can happen. I guess I have either misunderstood something about shared pointers, ros topic subscriptions, or both.
Things I have done:
At first I had the subscribe call happening in the constructor. I think giving the this pointer to another thread before the constructor has returned can be bad, so I moved this into a start function which is called after the object has been constructed.
There seem to be many aspects to the thread safety of shared_ptrs. At first I used mutex and last_received_msg directly in the callback. Now I have copied them into local variables, hoping this would help, but it doesn't seem to make a difference.
I have added an integer member variable. I can read the integer I assigned to this variable in the constructor from the callback, just as a sanity check to make sure that the callback is actually called on an instance created by my constructor.
I think I have figured out the problem.
When subscribing I am passing the this pointer to the subscribe function along with the callback. If the LatchedSubscriber is ever copied and the original deleted, that this pointer becomes invalid, but the sub still exists so the callback keeps being called.
I didn't think this happened anywhere in my code, but the LatchedSubscriber was stored as a member inside an object which was owned by a unique pointer. It looks like make_unique might be doing some copying internally? In any case it is wrong to use the this pointer for the callback.
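The same failure mode can be reproduced without ROS at all; a minimal model (hypothetical names, standard library only):

#include <cstdio>
#include <functional>

struct Sub {
    int test = 42;
    std::function<void()> cb;
    void start() { cb = [this] { std::printf("%d\n", test); }; }
};

int main() {
    std::function<void()> saved;
    {
        Sub original;
        original.start();
        Sub copy = original;   // copying does not re-point the captured this
        saved = copy.cb;       // copy.cb still targets &original
    }                          // original (and copy) are destroyed here
    saved();                   // undefined behavior: the captured this dangles
    return 0;
}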
I ended up doing the following instead
void start(ros::NodeHandle &nh, const std::string &topic) {
    auto l_mutex = mutex;
    auto l_last_received_msg = last_received_msg;
    boost::function<void(const T)> callback =
        [l_mutex, l_last_received_msg](const T msg) {
            std::lock_guard<std::mutex> guard(*l_mutex);
            *l_last_received_msg = msg;
        };
    sub = nh.subscribe<T>(topic, 1000, callback);
}
This way copies of the two smart pointers are used with the callback instead.
Assigning the closure to a variable of type boost::function<void(const T)> seems to be necessary, probably due to the way the subscribe overloads are resolved.
This appears to have fixed the issue. I might also move the subscription into the constructor again and get rid of the start method.
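For completeness, a hypothetical usage sketch (node name, topic name, and message type are assumptions, not from the original post):

#include <ros/ros.h>
#include <std_msgs/String.h>

// assumes the fixed LatchedSubscriber shown above
int main(int argc, char** argv) {
    ros::init(argc, argv, "latched_example");
    ros::NodeHandle nh;

    LatchedSubscriber<std_msgs::String> latched;
    latched.start(nh, "/chatter");

    ros::Rate rate(10);
    while (ros::ok()) {
        ros::spinOnce();                                 // lets the callback run
        std_msgs::String last = latched.get_last_msg();  // thread-safe copy
        rate.sleep();
    }
    return 0;
}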

Readable node stream to native C++ addon InputStream

Conceptually what I'm trying to do is very simple. I have a Readable stream in node, and I'm passing that to a native C++ addon where I want to connect it to an IInputStream.
The native library that I'm using works like many C++ (or Java) streaming interfaces that I've seen. The library provides an IInputStream interface (technically an abstract class), which I inherit from and override the virtual functions. It looks like this:
class JsReadable2InputStream : public IInputStream {
public:
    // Constructor takes a js v8 object, makes a stream out of it
    JsReadable2InputStream(const v8::Local<v8::Object>& streamObj);
    ~JsReadable2InputStream();

    /**
     * Blocking read. Blocks until the requested amount of data has been read. However,
     * if the stream reaches its end before the requested amount of bytes has been read
     * it returns the number of bytes read thus far.
     *
     * @param begin memory into which read data is copied
     * @param byteCount the requested number of bytes
     * @return the number of bytes actually read. Is less than byteCount iff
     *         end of stream has been reached.
     */
    virtual int read(char* begin, const int byteCount) override;
    virtual int available() const override;
    virtual bool isActive() const override;
    virtual void close() override;

private:
    Nan::Persistent<v8::Object> _stream;
    bool _active;
    JsEventLoopSync _evtLoop;
};
Of these functions, the important one here is read. The native library will call this function when it wants more data, and the function must block until it is able to return the requested data (or the stream ends). Here's my implementation of read:
int JsReadable2InputStream::read(char* begin, const int byteCount) {
    if (!this->_active) { return 0; }
    int read = -1;
    while (read < 0 && this->_active) {
        this->_evtLoop.invoke(
            (voidLambda)[this, &read, begin, byteCount]() {
                v8::Local<v8::Object> stream = Nan::New(this->_stream);
                const v8::Local<v8::Function> readFn = Nan::To<v8::Function>(
                    Nan::Get(stream, JS_STR("read")).ToLocalChecked()).ToLocalChecked();
                v8::Local<v8::Value> argv[] = { Nan::New<v8::Number>(byteCount) };
                v8::Local<v8::Value> result = Nan::Call(readFn, stream, 1, argv).ToLocalChecked();
                if (result->IsNull()) {
                    // Somewhat hacky/brittle way to check if stream has ended, but it's the only option
                    v8::Local<v8::Object> readableState = Nan::To<v8::Object>(
                        Nan::Get(stream, JS_STR("_readableState")).ToLocalChecked()).ToLocalChecked();
                    if (Nan::To<bool>(Nan::Get(readableState, JS_STR("ended")).ToLocalChecked()).ToChecked()) {
                        // End of stream, all data has been read
                        this->_active = false;
                        read = 0;
                        return;
                    }
                    // Not enough data available, but stream is still open.
                    // Set a flag for the c++ thread to go to sleep
                    // This is the case that it gets stuck in
                    read = -1;
                    return;
                }
                v8::Local<v8::Object> bufferObj = Nan::To<v8::Object>(result).ToLocalChecked();
                int len = Nan::To<int32_t>(Nan::Get(bufferObj, JS_STR("length")).ToLocalChecked()).ToChecked();
                char* buffer = node::Buffer::Data(bufferObj);
                if (len < byteCount) {
                    this->_active = false;
                }
                // copy the data out of the buffer
                if (len > 0) {
                    std::memcpy(begin, buffer, len);
                }
                read = len;
            }
        );
        if (read < 0) {
            // Give js a chance to read more data
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
    }
    return read;
}
The idea is, the c++ code keeps a reference to the node stream object. When the native code wants to read, it has to synchronize with the node event loop, then attempt to invoke read on the node stream. If the node stream returns null, this indicates that the data isn't ready, so the native thread sleeps, giving the node event loop thread a chance to run and fill its buffers.
This solution works perfectly for a single stream, or even 2 or 3 streams running in parallel. Then for some reason when I hit the magical number of 4+ parallel streams, this totally deadlocks. None of the streams can successfully read any bytes at all. The above while loop runs infinitely, with the call into the node stream returning null every time.
It is behaving as though node is getting starved, and the streams never get a chance to populate with data. However, I've tried adjusting the sleep duration (to much larger values, and randomized values) and that had no effect. It is also clear that the event loop continues to run, since my lambda function continues to get executed there (I put some printfs inside to confirm this).
Just in case it might be relevant (I don't think it is), I'm also including my implementation of JsEventLoopSync. This uses libuv to schedule a lambda to be executed on the node event loop. It is designed such that only one can be scheduled at a time, and other invocations must wait until the first completes.
#include <nan.h>
#include <functional>
#include <stdexcept>

// simplified type declarations for the lambda functions
using voidLambda = std::function<void ()>;

// Synchronize with the node v8 event loop. Invokes a lambda function on the event loop,
// where access to js objects is safe.
// Blocks execution of the invoking thread until execution of the lambda completes.
class JsEventLoopSync {
public:
    JsEventLoopSync() : _destroyed(false) {
        // register on the default (same as node) event loop, so that we can execute callbacks in that context
        // This takes a function pointer, which only works with a static function
        this->_handles = new async_handles_t();
        this->_handles->inst = this;
        uv_async_init(uv_default_loop(), &this->_handles->async, JsEventLoopSync::_processUvCb);
        // mechanism for passing this instance through to the native uv callback
        this->_handles->async.data = this->_handles;
        // mutex has to be initialized
        uv_mutex_init(&this->_handles->mutex);
        uv_cond_init(&this->_handles->cond);
    }

    ~JsEventLoopSync() {
        uv_mutex_lock(&this->_handles->mutex);
        // prevent access to deleted instance by callback
        this->_destroyed = true;
        uv_mutex_unlock(&this->_handles->mutex);
        // NOTE: Important, this->_handles must be a dynamically allocated pointer because uv_close() is
        // async, and still has a reference to it. If it were statically allocated as a class member, this
        // destructor would free the memory before uv_close was done with it (leading to asserts in libuv)
        uv_close(reinterpret_cast<uv_handle_t*>(&this->_handles->async), JsEventLoopSync::_asyncClose);
    }

    // called from the native code to invoke the function
    void invoke(const voidLambda& fn) {
        if (v8::Isolate::GetCurrent() != NULL) {
            // Already on the event loop, process now
            return fn();
        }
        // Need to sync with the event loop
        uv_mutex_lock(&this->_handles->mutex);
        if (this->_destroyed) {
            // don't leave the mutex locked on the early-return path
            uv_mutex_unlock(&this->_handles->mutex);
            return;
        }
        this->_fn = fn;
        // this will invoke processUvCb, on the node event loop
        uv_async_send(&this->_handles->async);
        // wait for it to complete processing
        uv_cond_wait(&this->_handles->cond, &this->_handles->mutex);
        uv_mutex_unlock(&this->_handles->mutex);
    }

private:
    // pulls data out of uv's void* to call the instance method
    static void _processUvCb(uv_async_t* handle) {
        if (handle->data == NULL) { return; }
        auto handles = static_cast<async_handles_t*>(handle->data);
        handles->inst->_process();
    }

    inline static void _asyncClose(uv_handle_t* handle) {
        auto handles = static_cast<async_handles_t*>(handle->data);
        handle->data = NULL;
        uv_mutex_destroy(&handles->mutex);
        uv_cond_destroy(&handles->cond);
        delete handles;
    }

    // Creates the js arguments (populated by invoking the lambda), then invokes the js function
    // Invokes resultLambda on the result
    // Must be run on the node event loop!
    void _process() {
        if (v8::Isolate::GetCurrent() == NULL) {
            // This is unexpected!
            throw std::logic_error("Unable to sync with node event loop for callback!");
        }
        uv_mutex_lock(&this->_handles->mutex);
        if (this->_destroyed) {
            // don't leave the mutex locked on the early-return path
            uv_mutex_unlock(&this->_handles->mutex);
            return;
        }
        Nan::HandleScope scope; // looks unused, but this is very important
        // invoke the lambda
        this->_fn();
        // signal that we're done
        uv_cond_signal(&this->_handles->cond);
        uv_mutex_unlock(&this->_handles->mutex);
    }

    typedef struct async_handles {
        uv_mutex_t mutex;
        uv_cond_t cond;
        uv_async_t async;
        JsEventLoopSync* inst;
    } async_handles_t;

    async_handles_t* _handles;
    voidLambda _fn;
    bool _destroyed;
};
So, what am I missing? Is there a better way to wait for the node thread to get a chance to run? Is there a totally different design pattern that would work better? Does node have some upper limit on the number of streams that it can process at once?
As it turns out, the problems that I was seeing were actually client-side limitations. Browsers (and seemingly also node) have a limit on the number of open TCP connections to the same origin. I worked around this by spawning multiple node processes to do my testing.
If anyone is trying to do something similar, the code I shared is totally viable. If I ever have some free time, I might make it into a library.

Is there any object that I can use as a FIFO with an event for popping data out?

I need to have a FIFO object that, when it has an element, generates an event or makes a callback to inform that an element is available. As far as I can see, std::queue doesn't support this.
In my case, I have two threads: one thread generates data and the other thread needs to consume it.
The speed at which the first thread generates data is not fixed, so I need a buffer that stores the data so the other thread can read and process it at a relatively constant rate.
I know how to implement the writer, but on the reader side, if I poll the queue, then I am losing some processing power checking the queue status, and I am wondering if there is any better way of doing this.
Edit 1
It is not about the thread safety of the queue: std::queue works based on polling, but I need something that is event based. std::queue is not event based and will not make a callback when new data is available.
If I understood your question right, you may very well use a C++ std::condition_variable for notification between threads about the availability of an item in the queue, and then call the callback.
The code will look something like this. Here the main thread acts as the generator thread and ReceiveThread acts as the consumer thread:
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

std::condition_variable Cv_;
std::mutex Mutex_;
std::queue<int> qVal;

void callback() {
    std::cout << "Callback called with queue value => " << qVal.front() << std::endl;
    qVal.pop();
}

void ReceiveThread() {
    while (true) {
        std::unique_lock<std::mutex> Lock(Mutex_);
        // Wait with a predicate so spurious wakeups (and notifications sent
        // before the wait started) don't lead to reading an empty queue
        Cv_.wait(Lock, [] { return !qVal.empty(); });
        callback();
    }
}

int main() {
    std::thread thrd(ReceiveThread);
    int pushVal = 1;
    while (1) {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        {
            // The queue must be protected by the same mutex in both threads
            std::lock_guard<std::mutex> Lock(Mutex_);
            qVal.push(pushVal);
        }
        std::cout << "Signalling for callback with value = " << pushVal << std::endl;
        pushVal++;
        Cv_.notify_all();
    }
}
I haven't added any exit condition in the while loop which you might want to have.
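The same idea can be packaged as a small reusable class, so the locking and waiting live in one place; a minimal sketch (not from the original answer, names illustrative):

#include <condition_variable>
#include <mutex>
#include <queue>

// Blocking FIFO: pop() blocks until an element is available,
// so the consumer needs no polling loop.
template <typename T>
class BlockingQueue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(value));
        }
        cv_.notify_one();
    }

    T pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        T value = std::move(queue_.front());
        queue_.pop();
        return value;
    }

private:
    std::queue<T> queue_;
    std::mutex mutex_;
    std::condition_variable cv_;
};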
Hope this helps.
The std::queue::push() function does not have any placeholder inside it where we could simply put a reference to our call_back function, so that after successfully inserting an element into the queue the call_back function would get invoked.
std::queue::push(data)
{
    //add data to internal container
    //placeholder section--------
    //invoke call_back function or event.
    //placeholder section--------
}
So, in the absence of such a placeholder, we can try automatic invocation of a call_back function or some event using RAII.
Suppose we wrap our actual data inside a struct which will help us with notifications. Then we will be required to access the actual data indirectly via objects of this struct.
struct data_notifier
{
    //the true data.
    int actual_data;

    data_notifier(int data) : actual_data(data)
    {
        //signal event queue_full
        //or
        //call a call_back function.
    }
};

int actual_data = 90;
std::queue<data_notifier*> q;
q.push(new data_notifier(actual_data));
Now, the only problem is: before an instance of data_notifier gets properly inserted into the queue as a reference/pointer, our call_back or event would already have been invoked. Having seen the event, the reader will try to read the data but will not get it from the queue, simply because the data has not yet been persisted inside the queue.
So a guarantee that the data is properly persisted inside the queue is only possible after the function std::queue::push() returns, which can and will happen inside the Writer function.
//event_full is a manual event which needs to be signalled and non-signalled manually.
void Writer()
{
    while(1)
    {
        //[1] wait for mutex_queue
        //[2] myqueue.push(data);
        //[3] data is now persisted so signal event_full
        //[4] release mutex_queue
    }
}

void Reader()
{
    while(1)
    {
        //[1] wait for event_full (so no polling)
        //[2] wait for mutex_queue
        //[3] --- access queue ---
        if(myqueue.size() != 0)
        {
            //process myqueue.front()
            //myqueue.pop();
        }
        if(myqueue.size() == 0)
        {
            //reset event_full
            //so as long as queue has data, Reader can process it else needs to wait.
        }
        //[3] --- access queue ---
        //[4] release mutex_queue
    }
}

Unable to receive a message using message_queue in Boost thread

I have a requirement to create an event-based multi-threaded application, for which I am trying to use boost::thread and boost/interprocess/ipc/message_queue for sending messages between threads.
What I am doing currently is making the thread wait in its worker function for a message.
Actually this is just a basic start where the sender and receiver are the same thread; at a later stage I have thought to store a list of message_queues, one per thread, and then fetch from it accordingly, or something like that.
But now, as per the code below, I am using:
//in a common class
typedef struct s_Request {
    int id;
} st_Request;

//in thread(XYZ) class
st_Request dataone;
message_queue *mq;

void XYZ::threadfunc(void *ptr)
{
    XYZ *obj = (XYZ*) ptr;
    obj->RecieveMsg();
}

void XYZ::RecieveMsg()
{
    message_queue mq1(open_only, "message_queue");
    if (!(mq1.try_receive(&dataone, sizeof(st_Request), recvd_size, priority)))
        printf("msg not received");
    printf("id = %d", dataone.id);
}

void XYZ::Create()
{
    mq = new message_queue(open_or_create, "message_queue", 100, sizeof(st_Request));
    boost:thread workerthread(threadfunc, this);
    workerthread.join();
}

void XYZ::Send(st_Request *data)
{
    if (!(mq->try_send(data, sizeof(st_Request), 0)))
        printf("message sending failed");
}

//I am calling it like
class ABC : public XYZ
{
    ..some functions to do stuff...
};

void ABC::createMSGQ()
{
    create();
    st_Request *data;
    data->id = 10;
    send(data);
}
My thread is waiting in RecieveMsg, but I am not getting any message; the prints appear up to the entry of the Send function, and then the code crashes.
Please guide me as to what I am doing wrong. If the approach is entirely wrong, I am open to moving to a new approach.
P.S. This is my first question on Stack Overflow. I tried to follow the guidelines, but if I strayed anywhere, please do correct me.
st_Request *data;
data->id = 10;
data is uninitialized; you cannot dereference it. Pointers should point to something before you dereference them.
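A minimal fix sketch, assuming the Send signature shown in the question:

st_Request data;   // an actual object with automatic storage
data.id = 10;
Send(&data);       // pass its address instead of an uninitialized pointer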
I don't understand the point of this function:
void XYZ::Create()
{
    mq = new message_queue(open_or_create, "message_queue", 100, sizeof(st_Request));
    boost:thread workerthread(threadfunc, this);
    workerthread.join();
}
You create a new thread, then block and wait for it to finish so you can join it. Why not just do the work here, instead of creating a new thread and waiting for it to finish?
What is threadfunc? Do you mean ThreadFunc?
This function is written strangely:
void XYZ::ThreadFunc(void *ptr)
{
    XYZ *obj = (XYZ*) ptr;
    obj->RecieveMsg();
}
Why not pass the argument as XYZ* instead of void*? Boost.Thread doesn't require everything to be passed as void*. Is that function static? It doesn't need to be:
struct XYZ {
    void threadFunc();
    void create();
    void recv();
};

void XYZ::threadFunc()
{
    recv();
}

void XYZ::create()
{
    boost::thread thr(&XYZ::threadFunc, this);
    thr.join();
}

Pointers to member functions within the same class (C++)?

I am writing a program to control an automated home brewing system on an Arduino Mega microcontroller (written in C/C++). In short, there is a C# application which periodically sends messages through USB to the microcontroller. There is a messaging interface which I wrote that reads each message and forwards it to whichever component the message is for. Each message is 16 bytes long: the first 4 are a transaction code, and the last 12 are data.
Now, I read in the message and forward it to my StateController class. It comes in through the inboundMessage function. I have a struct (defined in StateController.h) which contains a transaction code and a pointer to a member function of StateController. I defined a QueueList (just a simple linked-list library) and pushed a bunch of these structs into it. When a message comes into inboundMessage, I would like to loop through the linked list until I find a transaction code which matches, and then call the member function that handles that message, passing it the data in the message.
I think I have everything initialized correctly, but here is the problem: when I try to compile, I get an error saying "func does not exist in this scope". I have looked all over for a solution but cannot find one. My code is below.
StateController.cpp
StateController::StateController() {
    currentState = Idle;
    prevState = Idle;
    lastRunState = Idle;
    txnTable.push((txnRow){MSG_BURN, &StateController::BURNprocessor});
    txnTable.push((txnRow){MSG_MANE, &StateController::MANEprocessor});
    txnTable.push((txnRow){MSG_MAND, &StateController::MANDprocessor});
    txnTable.push((txnRow){MSG_PUMP, &StateController::PUMPprocessor});
    txnTable.push((txnRow){MSG_STAT, &StateController::STATprocessor});
    txnTable.push((txnRow){MSG_SYNC, &StateController::SYNCprocessor});
    txnTable.push((txnRow){MSG_VALV, &StateController::VALVprocessor});
}

void StateController::inboundMessage(GenericMessage msg) {
    // Read transaction code and do what needs to be done for it
    for (int x = 0; x < txnTable.count(); x++)
    {
        if (compareCharArr(msg.code, txnTable[x].code, TXN_CODE_LEN) == true)
        {
            (txnTable[x].*func)(msg.data);
            break;
        }
    }
}
StateController.h
class StateController {
// Public functions
public:
    // Constructor
    StateController();
    // State Controller message handler
    void inboundMessage(GenericMessage msg);
    // Main state machine
    void doWork();

// Private Members
private:
    // Hardware interface
    HardwareInterface hardwareIntf;
    // Current state holder
    StateControllerStates currentState;
    // Previous state
    StateControllerStates prevState;
    // Last run state
    StateControllerStates lastRunState;

    // BURN Message Processor
    void BURNprocessor(char data[]);
    // MANE Message Processor
    void MANEprocessor(char data[]);
    // MAND Message Processor
    void MANDprocessor(char data[]);
    // PUMP Message Processor
    void PUMPprocessor(char data[]);
    // STAT Message Processor
    void STATprocessor(char data[]);
    // SYNC Message Processor
    void SYNCprocessor(char data[]);
    // VALV Message Processor
    void VALVprocessor(char data[]);

    void primePumps();
    // Check the value of two sensors given the window
    int checkSensorWindow(int newSensor, int prevSensor, int window);

    struct txnRow {
        char code[TXN_CODE_LEN + 1];
        void (StateController::*func)(char[]);
    };
    QueueList<txnRow> txnTable;
};
Any idea what is wrong?
func is just a normal member of txnRow so you access it with ., not .*, e.g. txnTable[x].func.
To call this member function on, say, this, you would do something like:
(this->*(txnTable[x].func))(msg.data);
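For reference, a self-contained sketch of the two pointer-to-member call syntaxes (hypothetical names, independent of the brewing code):

#include <cstdio>

struct Handler {
    void hello(const char* data) { std::printf("hello %s\n", data); }
};

int main() {
    void (Handler::*func)(const char*) = &Handler::hello;
    Handler h;
    (h.*func)("world");      // call through an object with .*
    Handler* p = &h;
    (p->*func)("pointer");   // call through a pointer with ->*
    return 0;
}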