Parallel Processing in Node-RED - c++

This is a question related to Node-RED, but I'm asking here because I couldn't find an answer in the Node-RED forum. Maybe someone here can help me out.
I am importing a C++ addon into Node-RED to push samples from my Analog Discovery 2 device into Node-RED using N-API. The C++ code sends each sample to Node-RED through an event emitter, at a sampling rate of 500 samples/second. The metric info in the Node-RED window shows that the function node actually sends messages every second as it receives them from the addon, but only after all the samples have been sent (about 2 minutes later) does the debug node start to receive and display them. I am interested in real-time signal analysis, so I would like the messages to reach the debug node immediately after the preceding function node sends them (or be passed on to another node for further simultaneous processing).
I read about the asynchronous behaviour of Node-RED and expected it to work the way I want. Is there a way to make this work concurrently?
Thanks so much for any help.
const EventEmitter = require('events');
const test = require('C:/directory/build/Release/addon');

const emitter = new EventEmitter();

// Forward every sample emitted by the addon as a Node-RED message
emitter.on('something', (evt) => {
    msg.payload = evt;
    node.send(msg);
    node.done();
});

// Hand the emitter's emit function to the addon so it can push samples
test.Hello(emitter.emit.bind(emitter));
function.js
FDwfAnalogInStatusData(hdwf, 0, &rgdSamples[cSamples], cAvailable); // register newly available samples into rgdSamples[]
for (int i = cSamples; i < cSamples + cAvailable; i++) {
    napi_create_double(env, rgdSamples[i], &myNumber);
    emit.Call({ Napi::String::New(env, "something"), myNumber }); // emit one 'something' event per sample
}
addon.node
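For what it's worth, a common N-API pattern for streaming samples without tying up the Node.js event loop is Napi::ThreadSafeFunction: the device is read on a worker thread and each sample is queued onto the main loop, which stays free to dispatch messages between calls. The sketch below only illustrates that pattern (StartSampling, the fixed sample count, and the placeholder device read are made up), and is not a verified fix for the addon above:

#include <napi.h>
#include <thread>

// Illustrative sketch: push samples to JS from a worker thread without
// blocking the Node.js event loop.
Napi::Value StartSampling(const Napi::CallbackInfo& info) {
    Napi::Env env = info.Env();

    // Wrap the JS callback so it can be invoked safely from another thread.
    Napi::ThreadSafeFunction tsfn = Napi::ThreadSafeFunction::New(
        env, info[0].As<Napi::Function>(), "sample-stream", 0 /* unbounded queue */, 1);

    std::thread([tsfn]() mutable {
        for (int i = 0; i < 1000; ++i) {
            double* sample = new double(0.0);  // placeholder for a device read
            // Queue a call onto the main loop; it runs as soon as the loop is free.
            tsfn.BlockingCall(sample, [](Napi::Env env, Napi::Function cb, double* value) {
                cb.Call({ Napi::Number::New(env, *value) });
                delete value;
            });
        }
        tsfn.Release();
    }).detach();

    return env.Undefined();
}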

Related

What is causing unreachable errors in Google PubSub?

I'm running an application which consists of Google Cloud Functions triggered by PubSub topics, so basically they communicate with each other via Google PubSub.
The problem is that it sometimes struggles and shows delays of up to 9 s or more when publishing messages. I checked the Metrics Explorer and found that during the high-delay periods it shows the following errors:
unreachable_5xx_error_500
unreachable_no_response
internal_rejected_error
unreachable_5xx_error_503
url_4xx_error_429
Here is the chart showing delays:
Code example of publishing a message inside a Google Cloud Function:
const {PubSub} = require('@google-cloud/pubsub');

const pubSubClient = new PubSub();

async function publishMessage() {
    const topicName = 'my-topic';
    const dataBuffer = Buffer.from(data); // `data` is prepared elsewhere in the function
    const messageId = await pubSubClient.topic(topicName).publish(dataBuffer);
    console.log(`Message ${messageId} published.`);
}

publishMessage().catch(console.error);
Code example of a function triggered by a PubSub topic:
exports.subscribe = async (message) => {
    const name = message.data
        ? Buffer.from(message.data, 'base64').toString()
        : 'World';
    console.log(`Hello, ${name}!`);
};
I think these errors are causing the delays. I didn't find anything about this on the internet, so I hope you can explain what is causing these errors and why, and perhaps help with this.
As discussed in the comments, there are some changes and workarounds that can be applied to solve or reduce the problem.
First, as described in this guide, PubSub tries to gather multiple messages before delivering them; in other words, it tries to deliver many messages at once. In this specific case, to get closer to a real-time scenario, a batch size of 1 should be specified, which causes PubSub to deliver every message separately. This batch size can be set with the maxMessages property when creating the publisher object, as in the code below. In addition, the maxMilliseconds property can be used to specify the maximum latency allowed.
const batchPublisher = pubSubClient.topic(topicName, {
    batching: {
        maxMessages: maxMessages,
        maxMilliseconds: maxWaitTime * 1000,
    },
});
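For example, to favour latency over throughput, the publisher can be limited to one message per batch (the concrete values below are only an illustration, not taken from the original answer):

// Illustration: with maxMessages set to 1, each publish() is sent on its own
const lowLatencyTopic = pubSubClient.topic('my-topic', {
    batching: {
        maxMessages: 1,
        maxMilliseconds: 10,
    },
});

async function publishNow(data) {
    const messageId = await lowLatencyTopic.publish(Buffer.from(data));
    console.log(`Message ${messageId} published.`);
}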
In the discussion it was also noted that the problem is probably related to the Cloud Functions' cold start, which increases latency for this application due to its architecture. The workaround for this part of the problem was to insert a Node.js server into the architecture that triggers the functions via PubSub.
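A rough sketch of that workaround (everything below is assumed for illustration; the actual server from the discussion is not shown): a long-lived Node.js process keeps the PubSub client warm and publishes to the topic that triggers the functions:

// Illustration only: a long-running publisher avoids a cold start on every publish
const http = require('http');
const {PubSub} = require('@google-cloud/pubsub');

const pubSubClient = new PubSub();
const topic = pubSubClient.topic('my-topic');

http.createServer(async (req, res) => {
    try {
        const messageId = await topic.publish(Buffer.from('payload'));
        res.end(`Published ${messageId}\n`);
    } catch (err) {
        res.statusCode = 500;
        res.end(String(err));
    }
}).listen(8080);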

Detect client connection closed in the gRPC server

In the unary RPC example provided on the gRPC GitHub (client) and (server), is there any way to detect the client's closed connection?
For example, in the server.cc file:
std::string prefix("Hello ");
reply_.set_message(prefix + request_.name());

// And we are done! Let the gRPC runtime know we've finished, using the
// memory address of this instance as the uniquely identifying tag for
// the event.
status_ = FINISH;

int p = 0, i = 0;
while (i++ < 1000000000) { // some dummy work
    p = p + 10;
}

responder_.Finish(reply_, Status::OK, this);
With this dummy work before sending the response back to the client, the server takes a few seconds. If we close the client (for example, with Ctrl+C), the server does not throw any error. It simply calls Finish and then deallocates the object as if the Finish were successful.
Is there any async feature (handler function) on the server side to notify us that the client has closed the connection or has terminated?
Thank you!
Unfortunately, no.
However, the gRPC team is currently working hard on adding a callback mechanism to the C++ implementation. As I understand it, it will work the same way as in the Java implementation ( https://youtu.be/5tmPvSe7xXQ?t=1843 ).
You can see how to work with the future API in these examples: client_callback.cc and server_callback.cc.
The point of interest for you there is the ServerBidiReactor class from the ::grpc::experimental namespace on the server side. It has OnDone and OnCancel notification methods that may help you.
Another interesting point is that you can store a pointer to the connection object and send notifications to the client at any time.
But it still has many issues, and I don't recommend using this API in production code.
You can follow the current progress of the C++ callback implementation here: https://github.com/grpc/grpc/projects/12#card-12554506
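As a very rough sketch of how those notification methods are overridden (MyRequest and MyResponse stand in for the generated protobuf message types, and this assumes the experimental callback API as it existed at the time of writing):

#include <grpcpp/grpcpp.h>

// Illustration only: MyRequest/MyResponse are placeholders for the generated
// message types of the service being implemented.
class MyReactor : public grpc::experimental::ServerBidiReactor<MyRequest, MyResponse> {
public:
    void OnCancel() override {
        // Called when the client cancels the RPC or the connection is lost,
        // so long-running work can be abandoned early.
        cancelled_ = true;
    }

    void OnDone() override {
        // Called once the RPC has completely finished; safe to clean up here.
        delete this;
    }

private:
    bool cancelled_ = false;
};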

How to make sure last AMQP message is published successfully before closing connection?

I have multiple processes working together as a system. One of the processes acts as the main process. When the system is shutting down, every process needs to send a notification (via RabbitMQ) to the main process and then exit. The program is written in C++ and I am using the AMQPCPP library.
The problem is that sometimes the notification is not published successfully. I suspect exiting too soon is the cause of the problem, as the AMQPCPP library has no chance to send the message out before its connection is closed.
The documentation of AMQPCPP says:
Published messages are normally not confirmed by the server, and the RabbitMQ will not send a report back to inform you whether the message was successfully published or not. Therefore the publish method does not return a Deferred object.
As long as no error is reported via the Channel::onError() method, you can safely assume that your messages were delivered.
This can of course be a problem when you are publishing many messages. If you get an error halfway through there is no way to know for sure how many messages made it to the broker and how many should be republished. If this is important, you can wrap the publish commands inside a transaction. In this case, if an error occurs, the transaction is automatically rolled back by RabbitMQ and none of the messages are actually published.
Without a confirmation from the RabbitMQ server, it's hard to decide when it is safe to exit the process. Furthermore, using a transaction sounds like overkill for a single notification.
Could anyone suggest a simple solution for shutting down gracefully without losing the last notification?
It turns out that I can set up a callback when closing the channel, so that I can safely close the connection once all channels have closed successfully. I am not entirely sure whether this guarantees that all outgoing messages are really published, but from the test results it seems that the problem is solved.
class MyClass
{
    ...
    AMQP::TcpConnection m_tcpConnection;
    AMQP::TcpChannel m_channelA;
    AMQP::TcpChannel m_channelB;
    ...
};

void MyClass::stop(void)
{
    sendTerminateNotification();

    int remainChannel = 2;

    auto closeConnection = [&]() {
        --remainChannel;
        if (remainChannel == 0) {
            // close the connection when all channels are closed
            m_tcpConnection.close();
            ev::get_default_loop().break_loop();
        }
    };

    auto closeChannel = [&](AMQP::TcpChannel & channel) {
        channel.close()
            .onSuccess([&](void) { closeConnection(); })
            .onError([&](const char * msg)
            {
                std::cout << "cannot close channel: "
                          << msg << std::endl;
                // close the connection anyway
                closeConnection();
            });
    };

    closeChannel(m_channelA);
    closeChannel(m_channelB);
}
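One assumption worth stating explicitly (it is not shown in the snippet above): the onSuccess/onError callbacks only fire while the libev event loop is being dispatched, so if stop() is called from outside the loop, the loop has to be run again until break_loop() is reached, roughly like this:

// Assumed call site, not part of the original answer
myClass.stop();                  // request the channel/connection close
ev::get_default_loop().run();    // keep dispatching events until break_loop() fires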

Message retry and Dead Letter Queue in WSO2 2.2.0 Message Broker

We are evaluating the WSO2 stack and in particular the Message Broker v 2.2.0 and are not able to make the message retry limit work.
According to this documentation page, once the client has rejected a message 10 times it will be removed from the queue and placed on the dead letter queue.
https://docs.wso2.com/display/MB220/Maximum+Delivery+Attempts
Our definition of rejection is either:
a) Not sending acknowledgement in the case of using Session.CLIENT_ACKNOWLEDGE or
b) Rolling back the transaction in the case of using a transacted session.
Using the WSO2 example client code we are unable to observe this behaviour using any combination of client acknowledgement modes or induced failures. The message remains active in the queue and can be taken from it any number of times. Acknowledging it or committing the session removes it from the queue as you would expect.
Can anyone confirm if this feature actually works and if so, show us what a client has to do to trigger it. We have been testing using the WSO2 provided sample client code and an unmodified out-of-the-box server config:
https://docs.wso2.com/display/MB220/Sending+and+Receiving+Messages+Using+Queues
Any help would be appreciated as we are unable to continue with WSO2 without understanding exactly how this aspect of the system works.
This feature is working as expected. In order to test it, you need to make some modifications to the provided receiver client in the sample code:
1. Add the given system property (AndesAckWaitTimeOut).
2. Change the acknowledgement mode to CLIENT_ACKNOWLEDGE.
3. Receive the message more than 10 times without sending an ACK to the server.
With these changes you can reproduce the required behaviour.
Here is the modified method in the QueueReceiver class:
public void receiveMessages() throws NamingException, JMSException {
    Properties properties = new Properties();
    System.setProperty("AndesAckWaitTimeOut", "30000");
    properties.put(Context.INITIAL_CONTEXT_FACTORY, QPID_ICF);
    properties.put(CF_NAME_PREFIX + CF_NAME, getTCPConnectionURL(userName, password));
    System.out.println("getTCPConnectionURL(userName,password) = " + getTCPConnectionURL(userName, password));

    InitialContext ctx = new InitialContext(properties);

    // Lookup connection factory
    QueueConnectionFactory connFactory = (QueueConnectionFactory) ctx.lookup(CF_NAME);
    QueueConnection queueConnection = connFactory.createQueueConnection();
    queueConnection.start();
    QueueSession queueSession =
            queueConnection.createQueueSession(false, QueueSession.CLIENT_ACKNOWLEDGE);

    // Receive message
    Queue queue = queueSession.createQueue(queueName);
    MessageConsumer queueReceiver = queueSession.createConsumer(queue);
    int count = 0;
    while (count < 12) {
        TextMessage message = (TextMessage) queueReceiver.receive();
        System.out.println("Got message ==>" + message.getText());
        count++;
    }

    queueReceiver.close();
    queueSession.close();
    queueConnection.stop();
    queueConnection.close();
}
Please note that this modification is done just to prove that the feature is working.
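For the question's other definition of rejection (rolling back a transacted session), an assumed variant of the same test, not part of the original answer, would look roughly like this:

// Assumed variant: every delivery is rolled back instead of committed,
// which should also count towards the maximum delivery attempts.
QueueSession txSession = queueConnection.createQueueSession(true, Session.SESSION_TRANSACTED);
MessageConsumer txReceiver = txSession.createConsumer(queue);
int attempts = 0;
while (attempts < 12) {
    TextMessage message = (TextMessage) txReceiver.receive();
    System.out.println("Got message ==>" + message.getText());
    txSession.rollback();   // reject the delivery
    attempts++;
}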

How to display remote email message?

I have been using this code to display IMAP4 messages:
void DisplayMessageL( const TMsvId &aId )
    {
    // 1. construct the client MTM
    TMsvEntry indexEntry;
    TMsvId serviceId;
    User::LeaveIfError( iMsvSession->GetEntry(aId, serviceId, indexEntry) );
    CBaseMtm* mtm = iClientReg->NewMtmL(indexEntry.iMtm);
    CleanupStack::PushL(mtm);

    // 2. construct the user interface MTM
    CBaseMtmUi* uiMtm = iUiReg->NewMtmUiL(*mtm);
    CleanupStack::PushL(uiMtm);

    // 3. display the message
    uiMtm->BaseMtm().SwitchCurrentEntryL(indexEntry.Id());
    CMsvOperationWait* waiter = CMsvOperationWait::NewLC();
    waiter->Start(); // we use a synchronous waiter
    CMsvOperation* op = uiMtm->OpenL(waiter->iStatus);
    CleanupStack::PushL(op);
    CActiveScheduler::Start();

    // 4. cleanup
    CleanupStack::PopAndDestroy(4); // op, waiter, mtm, uiMtm
    }
However, when the user attempts to download a remote message (i.e. one of the emails not previously retrieved from the mail server) and then cancels the request, my code remains blocked and never receives any information that the action was cancelled.
My question is:
What is the workaround for the above, so that the application is not stuck?
Can anyone provide a working example of an asynchronous call for opening remote messages which does not panic and crash the application?
Asynchronous calls for POP3, SMTP and local IMAP4 messages work perfectly, but remote IMAP4 messages create this issue.
I am testing these examples for S60 5th edition.
Thank you all in advance.
First of all, I would try removing CMsvOperationWait and deal with the open request asynchronously - i.e. have an active object waiting for the CMsvOperation to complete.
CMsvOperationWait is nothing more than a convenience to make an asynchronous operation appear synchronous, and my suspicion is that this is the culprit - in the download-then-show-message case, there are two asynchronous operations chained together.
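A minimal sketch of that approach (the class and method names below are made up for illustration; two-phase construction and error handling are omitted): an active object owns the CMsvOperation and gets RunL() when the open completes or is cancelled, instead of blocking in CMsvOperationWait:

// Illustration only - not from the original answer.
class COpenMessageWatcher : public CActive
    {
public:
    COpenMessageWatcher() : CActive( EPriorityStandard )
        {
        CActiveScheduler::Add( this );
        }

    ~COpenMessageWatcher()
        {
        Cancel();
        delete iOperation;
        }

    void StartL( CBaseMtmUi& aUiMtm )
        {
        iOperation = aUiMtm.OpenL( iStatus );
        SetActive();
        }

private:
    void RunL()
        {
        // iStatus.Int() is KErrCancel if the user aborted the download,
        // KErrNone if the message was opened successfully.
        delete iOperation;
        iOperation = NULL;
        }

    void DoCancel()
        {
        if ( iOperation )
            {
            iOperation->Cancel();
            }
        }

    CMsvOperation* iOperation;
    };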