Invoke javascript callback repeatedly from C++ module in node.js - c++

So I am writing a node.js module in C++ which processes data from a camera. I want to invoke a callback function in my app.js file whenever new data is available.
What I have at the moment
Right now I have the following code in my node JavaScript file (app.js). Every second it calls a function in my C++ module that returns the number of frames that have been processed so far:
setInterval(function () {
var counter = MyCPPModule.NumberOfFrames();
console.log(counter);
}, 1000);
In my C++ file I have the following functions.
1.) I have a function to get the number of frames (the function that gets called from JavaScript as above), where frameCounter refers to a global variable.
void NumberOfFrames(const FunctionCallbackInfo<Value>& args) {
Isolate* isolate = Isolate::GetCurrent();
HandleScope scope(isolate);
Local<Integer> retval = v8::Integer::New(isolate, frameCounter);
args.GetReturnValue().Set(retval);
}
2.) And I have a C++ callback function in the C++ module that gets triggered whenever a new frame is available (just to be clear, this callback is independent of node.js and has nothing directly to do with node.js callbacks).
void NewFrameAvailable(char** imageBuffer, /* more arguments */)
{
// ...
frameCounter++; // increment global variable
// ...
}
All of this works fine.
What I would like to accomplish
Instead of registering a function with setInterval in my JavaScript code, I would like to somehow register a function with my C++ module that gets called repeatedly whenever a new frame is available from the camera. It should behave like setInterval, but instead of being triggered every second, it gets triggered when a frame is available.
The code that I am hoping for on the javascript-side would be something like:
MyCPPModule.setMyFrameInterval(function (msg) {
console.log(msg);
});
Inside the C++ Module I would like to do something like:
void NewFrameAvailable(char** imageBuffer, /* more arguments */)
{
// ...
frameCounter++; // increment global variable
Local<Function> cb = /* WHAT DO I DO HERE*/ ;
Isolate* isolate = /* WHAT DO I DO HERE */;
const unsigned argc = 1;
Local<Value> argv[argc] = { String::NewFromUtf8(isolate, std::to_string(frameCounter).c_str()) };
cb->Call(Null(isolate), argc, argv);
// ...
}
and a function that registers the JavaScript callback with setMyFrameInterval:
void setMyFrameInterval(const FunctionCallbackInfo<Value>& args) {
Local<Function> cb = Local<Function>::Cast(args[0]); // make this somehow global?
Isolate* isolate = args.GetIsolate();
//...
}
So, how can I invoke the JavaScript callback from NewFrameAvailable (the C++ callback function which gets triggered when frames are available)? I think I basically have to make the JavaScript function somehow globally available in the setMyFrameInterval function so that it is also known to the NewFrameAvailable function. How do I do this?
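My best guess so far is a rough sketch like the following (the names registeredCallback and registeredIsolate are made up, and this assumes NewFrameAvailable is invoked on the main node thread rather than a separate camera thread - otherwise something like uv_async_send would presumably be needed):
// Hypothetical globals - I'm not sure this is the correct/idiomatic approach.
Persistent<Function> registeredCallback;
Isolate* registeredIsolate = nullptr;

void setMyFrameInterval(const FunctionCallbackInfo<Value>& args) {
    registeredIsolate = args.GetIsolate();
    // Keep the JS function alive beyond this call via a persistent handle.
    registeredCallback.Reset(registeredIsolate, Local<Function>::Cast(args[0]));
}

void NewFrameAvailable(char** imageBuffer /* , more arguments */) {
    frameCounter++;
    HandleScope scope(registeredIsolate);
    // Recreate a local handle from the persistent one and invoke it.
    Local<Function> cb = Local<Function>::New(registeredIsolate, registeredCallback);
    const unsigned argc = 1;
    Local<Value> argv[argc] = {
        String::NewFromUtf8(registeredIsolate, std::to_string(frameCounter).c_str())
    };
    cb->Call(Null(registeredIsolate), argc, argv);
}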

Related

IOUserClientMethodArguments completion value is always NULL

I'm trying to use IOConnectCallAsyncStructMethod in order to set a callback between a client and a driver in DriverKit for iPadOS.
This is how I call IOConnectCallAsyncStructMethod
ret = IOConnectCallAsyncStructMethod(connection, MessageType_RegisterAsyncCallback, masterPort, asyncRef, kIOAsyncCalloutCount, nullptr, 0, &outputAssignCallback, &outputSize);
Where asyncRef is:
asyncRef[kIOAsyncCalloutFuncIndex] = (io_user_reference_t)AsyncCallback;
asyncRef[kIOAsyncCalloutRefconIndex] = (io_user_reference_t)nullptr;
and AsyncCallback is:
static void AsyncCallback(void* refcon, IOReturn result, void** args, uint32_t numArgs)
{
const char* funcName = nullptr;
uint64_t* arrArgs = (uint64_t*)args;
ReadDataStruct* output = (ReadDataStruct*)(arrArgs + 1);
switch (arrArgs[0])
{
case 1:
{
funcName = "'Register Async Callback'";
} break;
case 2:
{
funcName = "'Async Request'";
} break;
default:
{
funcName = "UNKNOWN";
} break;
}
printf("Got callback of %s from dext with returned data ", funcName);
printf("with return code: 0x%08x.\n", result);
// Stop the run loop so our program can return to normal processing.
CFRunLoopStop(globalRunLoop);
}
But IOConnectCallAsyncStructMethod is always returning kIOReturnBadArgument and I can see that when the method:
kern_return_t MyDriverClient::ExternalMethod(uint64_t selector, IOUserClientMethodArguments* arguments, const IOUserClientMethodDispatch* dispatch, OSObject* target, void* reference) {
kern_return_t ret = kIOReturnSuccess;
if (selector < NumberOfExternalMethods)
{
dispatch = &externalMethodChecks[selector];
if (!target)
{
target = this;
}
}
return super::ExternalMethod(selector, arguments, dispatch, target, reference);
}
is called, the completion field of IOUserClientMethodArguments* arguments is completion = (OSAction *) NULL.
This is the IOUserClientMethodDispatch I use to check the values:
[ExternalMethodType_RegisterAsyncCallback] =
{
.function = (IOUserClientMethodFunction) &Mk1dDriverClient::StaticRegisterAsyncCallback,
.checkCompletionExists = true,
.checkScalarInputCount = 0,
.checkStructureInputSize = 0,
.checkScalarOutputCount = 0,
.checkStructureOutputSize = sizeof(ReadDataStruct),
},
Any idea what I'm doing wrong? Or any other ideas?
The likely cause for kIOReturnBadArgument:
The port argument in your method call looks suspicious:
IOConnectCallAsyncStructMethod(connection, MessageType_RegisterAsyncCallback, masterPort, …
------------------------------------------------------------------------------^^^^^^^^^^
If you're passing the IOKit main/master port (kIOMasterPortDefault) here, that's wrong. The purpose of this argument is to provide a notification Mach port which will receive the async completion message. You'll want to create a port and schedule it on an appropriate dispatch queue or runloop. I typically use something like this:
// Save this somewhere for the entire time you might receive notification callbacks:
IONotificationPortRef notify_port = IONotificationPortCreate(kIOMasterPortDefault);
// Set the GCD dispatch queue on which we want callbacks called (can be main queue):
IONotificationPortSetDispatchQueue(notify_port, callback_dispatch_queue);
// This is what you pass to each async method call:
mach_port_t callback_port = IONotificationPortGetMachPort(notify_port);
And once you're done with the notification port, make sure to destroy it using IONotificationPortDestroy().
It looks like you might be using runloops. In that case, instead of calling IONotificationPortSetDispatchQueue, you can use the IONotificationPortGetRunLoopSource function to get the notification port's runloop source, which you can then schedule on the CFRunloop object you're using.
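A sketch of that runloop variant, assuming globalRunLoop from your callback code is the runloop the completions should be delivered on:
// Create the notification port as before:
IONotificationPortRef notify_port = IONotificationPortCreate(kIOMasterPortDefault);
// Schedule its runloop source instead of setting a dispatch queue:
CFRunLoopSourceRef source = IONotificationPortGetRunLoopSource(notify_port);
CFRunLoopAddSource(globalRunLoop, source, kCFRunLoopDefaultMode);
// As before, this is what you pass to each async method call:
mach_port_t callback_port = IONotificationPortGetMachPort(notify_port);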
Some notes about async completion arguments:
You haven't posted your DriverKit side AsyncCompletion() call, and at any rate this isn't causing your immediate problem, but it will probably blow up once you fix the async call itself:
If your async completion passes only 2 user arguments, you're using the wrong callback function signature on the app side. Instead of IOAsyncCallback you must use the IOAsyncCallback2 form.
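For reference, the two-argument form declared in IOKit/IOKitLib.h has this shape (double-check against the header):
typedef void (*IOAsyncCallback2)(void* refcon, IOReturn result, void* arg0, void* arg1);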
Also, even if you are passing 3 or more arguments where the IOAsyncCallback form is correct, I believe this code technically triggers undefined behaviour due to aliasing rules:
uint64_t* arrArgs = (uint64_t*)args;
ReadDataStruct* output = (ReadDataStruct*)(arrArgs + 1);
switch (arrArgs[0])
The following would I think be correct:
ReadDataStruct* output = (ReadDataStruct*)(args + 1);
switch ((uintptr_t)args[0])
(Don't cast the array pointer itself, cast each void* element.)
Notes about async output struct arguments
I notice you have a struct output argument in your async method call, with a buffer that looks fairly small. If you're planning to update that with data on the DriverKit side after the initial ExternalMethod returns, you may be in for a surprise: an output struct argument that is not passed as an IOMemoryDescriptor will be copied to the app side immediately on method return, not when the async completion is triggered.
So how do you fix this? For very small data, pass it in the async completion arguments themselves. For arbitrarily sized byte buffers, the only way I know of is to ensure the output struct argument is passed via IOMemoryDescriptor, which can be persistently memory-mapped in a shared mapping between the driver and the app process. OK, how do you pass it as a memory descriptor? Basically, the output struct must be larger than 4096 bytes. Yes, this essentially means you may have to make your buffer unnaturally large.
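For example, a padded output struct along these lines (a hypothetical replacement for your ReadDataStruct) crosses the 4096-byte threshold and is therefore delivered via an IOMemoryDescriptor rather than copied inline when the method returns:
typedef struct {
    uint32_t length;        // bytes the driver actually wrote
    uint8_t  bytes[8192];   // deliberately larger than 4096 bytes
} ReadDataStruct;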

how to implement node-nan callback using node-addon-api

Until now I've only implemented synchronous node-addon-api methods, i.e., a JavaScript function makes a call, work is done, and the addon returns. I have big gaps in knowledge when it comes to the inner workings of v8, libuv, and node, so please correct any obvious misconceptions.
The goal is to call a JavaScript callback when C++ garbage collection callbacks are called from v8. I originally just called the JavaScript callback from the v8 garbage collection callback, but that ended up with a segv after a couple of calls. It seems that just making a call into JavaScript while being called from a v8 callback has some problems (the v8 docs say the callbacks shouldn't allocate objects). So I looked around and found a Nan-based example that uses libuv and Nan's AsyncResource to make the callback. The following approach works using node-nan:
NAN_GC_CALLBACK(afterGC) {
uint64_t et = uv_hrtime() - gcStartTime;
// other bookkeeping for GCData_t raw.
if (doCallbacks) {
uv_async_t* async = new uv_async_t;
GCData_t* data = new GCData_t;
*data = raw;
data->gcTime = et;
async->data = data;
uv_async_init(uv_default_loop(), async, asyncCB);
uv_async_send(async);
}
}
class GCResponseResource : public Nan::AsyncResource {
public:
GCResponseResource(Local<Function> callback_)
: Nan::AsyncResource("nan:gcstats.DeferredCallback") {
callback.Reset(callback_);
}
~GCResponseResource() {
callback.Reset();
}
Nan::Persistent<Function> callback;
};
static GCResponseResource* asyncResource;
static void closeCB(uv_handle_t *handle) {
delete handle;
}
static void asyncCB(uv_async_t *handle) {
Nan::HandleScope scope;
GCData_t* data = static_cast<GCData_t*>(handle->data);
Local<Object> obj = Nan::New<Object>();
Nan::Set(obj, Nan::New("gcCount").ToLocalChecked(),
Nan::New<Number>((data->gcCount));
Nan::Set(obj, Nan::New("gcTime").ToLocalChecked(),
Nan::New<Number>(data->gcTime));
Local<Object> counts = Nan::New<v8::Object>();
for (int i = 0; i < maxTypeCount; i++) {
if (data->typeCounts[i] != 0) {
Nan::Set(counts, i, Nan::New<Number>(data->typeCounts[i]));
}
}
Nan::Set(obj, Nan::New("gcTypeCounts").ToLocalChecked(), counts);
Local<Value> arguments[] = {obj};
Local<Function> callback = Nan::New(asyncResource->callback);
v8::Local<v8::Object> target = Nan::New<v8::Object>();
asyncResource->runInAsyncScope(target, callback, 1, arguments);
delete data;
uv_close((uv_handle_t*) handle, closeCB);
}
My question is how would I do this using the node-addon-api instead of nan?
It's not clear to me what the node-addon-api equivalents of uv_async_init, uv_async_send, etc. are. This is partially because it's not clear to me what underlying N-API (as opposed to node-addon-api) functions are required.
I have been unable to find an example like this. The callback example is completely synchronous. The async pi example uses a worker thread to perform a task but that seems overkill compared to the approach in the nan-based code using the uv primitives.
Your example is not really asynchronous, because the GC callbacks run in the main thread. However, when the JS world is stopped because of the GC, this does not mean that it is stopped in a way that allows a callback to run - the GC can stop it in the middle of a function.
You need a ThreadSafeFunction to do this. Look here for an example:
https://github.com/nodejs/node-addon-api/blob/main/doc/threadsafe_function.md
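A minimal sketch of that approach with node-addon-api (the names SetGCCallback and OnAfterGC and the trimmed-down GCData_t are made up for illustration; see the linked doc for the full API):
#include <napi.h>

struct GCData_t {
    int gcCount;
    uint64_t gcTime;
};

static Napi::ThreadSafeFunction tsfn;

// Called once from JS to register the callback that should receive GC stats.
Napi::Value SetGCCallback(const Napi::CallbackInfo& info) {
    tsfn = Napi::ThreadSafeFunction::New(
        info.Env(),
        info[0].As<Napi::Function>(),   // the JS callback
        "gcstats:DeferredCallback",     // resource name (for async_hooks)
        0,                              // unlimited queue
        1);                             // one thread will use it
    return info.Env().Undefined();
}

// Can be called from the GC hook; the lambda runs later on the main loop,
// where it is safe to create JS values and call back into JS.
void OnAfterGC(const GCData_t& raw) {
    GCData_t* data = new GCData_t(raw);
    tsfn.BlockingCall(data, [](Napi::Env env, Napi::Function cb, GCData_t* data) {
        Napi::Object obj = Napi::Object::New(env);
        obj.Set("gcCount", Napi::Number::New(env, data->gcCount));
        obj.Set("gcTime", Napi::Number::New(env, static_cast<double>(data->gcTime)));
        cb.Call({obj});
        delete data;
    });
}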

Readable node stream to native c++ addon InputStream

Conceptually what I'm trying to do is very simple. I have a Readable stream in node, and I'm passing that to a native c++ addon where I want to connect that to an IInputStream.
The native library that I'm using works like many c++ (or Java) streaming interfaces that I've seen. The library provides an IInputStream interface (technically an abstract class), which I inherit from and override the virtual functions. Looks like this:
class JsReadable2InputStream : public IInputStream {
public:
// Constructor takes a js v8 object, makes a stream out of it
JsReadable2InputStream(const v8::Local<v8::Object>& streamObj);
~JsReadable2InputStream();
/**
* Blocking read. Blocks until the requested amount of data has been read. However,
* if the stream reaches its end before the requested amount of bytes has been read
* it returns the number of bytes read thus far.
*
* @param begin memory into which read data is copied
* @param byteCount the requested number of bytes
* @return the number of bytes actually read. Is less than byteCount iff
* end of stream has been reached.
*/
virtual int read(char* begin, const int byteCount) override;
virtual int available() const override;
virtual bool isActive() const override;
virtual void close() override;
private:
Nan::Persistent<v8::Object> _stream;
bool _active;
JsEventLoopSync _evtLoop;
};
Of these functions, the important one here is read. The native library will call this function when it wants more data, and the function must block until it is able to return the requested data (or the stream ends). Here's my implementation of read:
int JsReadable2InputStream::read(char* begin, const int byteCount) {
if (!this->_active) { return 0; }
int read = -1;
while (read < 0 && this->_active) {
this->_evtLoop.invoke(
(voidLambda)[this,&read,begin,byteCount](){
v8::Local<v8::Object> stream = Nan::New(this->_stream);
const v8::Local<v8::Function> readFn = Nan::To<v8::Function>(Nan::Get(stream, JS_STR("read")).ToLocalChecked()).ToLocalChecked();
v8::Local<v8::Value> argv[] = { Nan::New<v8::Number>(byteCount) };
v8::Local<v8::Value> result = Nan::Call(readFn, stream, 1, argv).ToLocalChecked();
if (result->IsNull()) {
// Somewhat hacky/brittle way to check if stream has ended, but it's the only option
v8::Local<v8::Object> readableState = Nan::To<v8::Object>(Nan::Get(stream, JS_STR("_readableState")).ToLocalChecked()).ToLocalChecked();
if (Nan::To<bool>(Nan::Get(readableState, JS_STR("ended")).ToLocalChecked()).ToChecked()) {
// End of stream, all data has been read
this->_active = false;
read = 0;
return;
}
// Not enough data available, but stream is still open.
// Set a flag for the c++ thread to go to sleep
// This is the case that it gets stuck in
read = -1;
return;
}
v8::Local<v8::Object> bufferObj = Nan::To<v8::Object>(result).ToLocalChecked();
int len = Nan::To<int32_t>(Nan::Get(bufferObj, JS_STR("length")).ToLocalChecked()).ToChecked();
char* buffer = node::Buffer::Data(bufferObj);
if (len < byteCount) {
this->_active = false;
}
// copy the data out of the buffer
if (len > 0) {
std::memcpy(begin, buffer, len);
}
read = len;
}
);
if (read < 0) {
// Give js a chance to read more data
std::this_thread::sleep_for(std::chrono::milliseconds(10));
}
}
return read;
}
The idea is that the C++ code keeps a reference to the node stream object. When the native code wants to read, it has to synchronize with the node event loop, then attempt to invoke read on the node stream. If the node stream returns null, this indicates that the data isn't ready, so the native thread sleeps, giving the node event loop thread a chance to run and fill its buffers.
This solution works perfectly for a single stream, or even 2 or 3 streams running in parallel. Then for some reason when I hit the magical number of 4+ parallel streams, this totally deadlocks. None of the streams can successfully read any bytes at all. The above while loop runs infinitely, with the call into the node stream returning null every time.
It is behaving as though node is getting starved, and the streams never get a chance to populate with data. However, I've tried adjusting the sleep duration (to much larger values, and randomized values) and that had no effect. It is also clear that the event loop continues to run, since my lambda function continues to get executed there (I put some printfs inside to confirm this).
Just in case it might be relevant (I don't think it is), I'm also including my implementation of JsEventLoopSync. This uses libuv to schedule a lambda to be executed on the node event loop. It is designed such that only one can be scheduled at a time, and other invocations must wait until the first completes.
#include <nan.h>
#include <functional>
// simplified type declarations for the lambda functions
using voidLambda = std::function<void ()>;
// Synchronize with the node v8 event loop. Invokes a lambda function on the event loop, where access to js objects is safe.
// Blocks execution of the invoking thread until execution of the lambda completes.
class JsEventLoopSync {
public:
JsEventLoopSync() : _destroyed(false) {
// register on the default (same as node) event loop, so that we can execute callbacks in that context
// This takes a function pointer, which only works with a static function
this->_handles = new async_handles_t();
this->_handles->inst = this;
uv_async_init(uv_default_loop(), &this->_handles->async, JsEventLoopSync::_processUvCb);
// mechanism for passing this instance through to the native uv callback
this->_handles->async.data = this->_handles;
// mutex has to be initialized
uv_mutex_init(&this->_handles->mutex);
uv_cond_init(&this->_handles->cond);
}
~JsEventLoopSync() {
uv_mutex_lock(&this->_handles->mutex);
// prevent access to deleted instance by callback
this->_destroyed = true;
uv_mutex_unlock(&this->_handles->mutex);
// NOTE: Important, this->_handles must be a dynamically allocated pointer because uv_close() is
// async, and still has a reference to it. If it were statically allocated as a class member, this
// destructor would free the memory before uv_close was done with it (leading to asserts in libuv)
uv_close(reinterpret_cast<uv_handle_t*>(&this->_handles->async), JsEventLoopSync::_asyncClose);
}
// called from the native code to invoke the function
void invoke(const voidLambda& fn) {
if (v8::Isolate::GetCurrent() != NULL) {
// Already on the event loop, process now
return fn();
}
// Need to sync with the event loop
uv_mutex_lock(&this->_handles->mutex);
if (this->_destroyed) { return; }
this->_fn = fn;
// this will invoke processUvCb, on the node event loop
uv_async_send(&this->_handles->async);
// wait for it to complete processing
uv_cond_wait(&this->_handles->cond, &this->_handles->mutex);
uv_mutex_unlock(&this->_handles->mutex);
}
private:
// pulls data out of uv's void* to call the instance method
static void _processUvCb(uv_async_t* handle) {
if (handle->data == NULL) { return; }
auto handles = static_cast<async_handles_t*>(handle->data);
handles->inst->_process();
}
inline static void _asyncClose(uv_handle_t* handle) {
auto handles = static_cast<async_handles_t*>(handle->data);
handle->data = NULL;
uv_mutex_destroy(&handles->mutex);
uv_cond_destroy(&handles->cond);
delete handles;
}
// Creates the js arguments (populated by invoking the lambda), then invokes the js function
// Invokes resultLambda on the result
// Must be run on the node event loop!
void _process() {
if (v8::Isolate::GetCurrent() == NULL) {
// This is unexpected!
throw std::logic_error("Unable to sync with node event loop for callback!");
}
uv_mutex_lock(&this->_handles->mutex);
if (this->_destroyed) { return; }
Nan::HandleScope scope; // looks unused, but this is very important
// invoke the lambda
this->_fn();
// signal that we're done
uv_cond_signal(&this->_handles->cond);
uv_mutex_unlock(&this->_handles->mutex);
}
typedef struct async_handles {
uv_mutex_t mutex;
uv_cond_t cond;
uv_async_t async;
JsEventLoopSync* inst;
} async_handles_t;
async_handles_t* _handles;
voidLambda _fn;
bool _destroyed;
};
So, what am I missing? Is there a better way to wait for the node thread to get a chance to run? Is there a totally different design pattern that would work better? Does node have some upper limit on the number of streams that it can process at once?
As it turns out, the problems that I was seeing were actually client-side limitations. Browsers (and seemingly also node) have a limit on the number of open TCP connections to the same origin. I worked around this by spawning multiple node processes to do my testing.
If anyone is trying to do something similar, the code I shared is totally viable. If I ever have some free time, I might make it into a library.

C++ scope and Google V8 script context

I have the following, almost working piece of code written in c++:
[..]
Handle<Object> jsGlobal;
Handle<Function> jsUpdateFunc;
void setupJs () {
V8::Initialize();
Isolate* isolate = v8::Isolate::New();
Isolate::Scope isolate_scope(isolate);
HandleScope handle_scope(isolate);
Local<Context> context = Context::New(isolate);
Context::Scope context_scope(context);
Local<String> source = String::NewFromUtf8(isolate, "var a = 0; function test() { a++; return a.toString(); }");
Local<Script> script = Script::Compile(source);
script->Run();
jsGlobal = context->Global();
Handle<Value> value = jsGlobal->Get(String::NewFromUtf8(isolate, "test"));
jsUpdateFunc = Handle<Function>::Cast(value);
}
void callJs() {
Handle<Value> args[0];
Handle<Value> js_result = jsUpdateFunc->Call(jsGlobal, 0, args);
js_result->ToString();
String::Utf8Value utf8(js_result);
printf("%s\n", *utf8);
}
[..]
I have the function setupJs() set up the v8 environment, and callJs() is supposed to be called multiple times (when working, the JavaScript script increments var a by one each time).
If I put
Handle<Value> args[0];
Handle<Value> js_result = jsUpdateFunc->Call(jsGlobal, 0, args);
js_result->ToString();
String::Utf8Value utf8(js_result);
printf("%s\n", *utf8);
in setupJs, I can see that the function is called and "1" is printed. But if I leave the function call within a different function to be called later, I get a segfault at the line Handle<Value> js_result = jsUpdateFunc->Call(jsGlobal, 0, args);
I've checked and both jsUpdateFunc and jsGlobal are non-null pointers.
You need to use persistent handles for jsGlobal and jsUpdateFunc. A normal (local) handle becomes invalid when its enclosing v8::HandleScope is destroyed.
You'll also want a global variable for the v8::Isolate pointer and another one for a persistent handle to the v8::Context.
To call the script function later, you need to:
Lock the isolate (which you really should do in setupJs as well; see v8::Locker)
Enter the isolate (see v8::Isolate::Scope).
Establish a handle scope (see v8::HandleScope).
Create a local handle for the context.
Enter the context (see v8::Context::Scope).
Create local handles for jsGlobal and jsUpdateFunc.
Call the script function as above.
Look for v8::Persistent and related templates in the V8 header file.
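A rough sketch of those steps (matching the API version your code appears to use; details vary between V8 releases):
// Globals kept alive across calls: persistent handles instead of locals.
Isolate* gIsolate = nullptr;
Persistent<Context> gContext;
Persistent<Function> gUpdateFunc;

void setupJs() {
    V8::Initialize();
    gIsolate = Isolate::New();
    Locker locker(gIsolate);
    Isolate::Scope isolate_scope(gIsolate);
    HandleScope handle_scope(gIsolate);
    Local<Context> context = Context::New(gIsolate);
    Context::Scope context_scope(context);
    Local<String> source = String::NewFromUtf8(gIsolate,
        "var a = 0; function test() { a++; return a.toString(); }");
    Script::Compile(source)->Run();
    Local<Value> value = context->Global()->Get(String::NewFromUtf8(gIsolate, "test"));
    // Promote the short-lived locals to persistent handles.
    gContext.Reset(gIsolate, context);
    gUpdateFunc.Reset(gIsolate, Handle<Function>::Cast(value));
}

void callJs() {
    Locker locker(gIsolate);
    Isolate::Scope isolate_scope(gIsolate);
    HandleScope handle_scope(gIsolate);
    // Recreate local handles from the persistent ones and re-enter the context.
    Local<Context> context = Local<Context>::New(gIsolate, gContext);
    Context::Scope context_scope(context);
    Local<Function> fn = Local<Function>::New(gIsolate, gUpdateFunc);
    Local<Value> js_result = fn->Call(context->Global(), 0, nullptr);
    String::Utf8Value utf8(js_result);
    printf("%s\n", *utf8);
}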

Calling callback from node.js native code

I'm writing an add-on for node.js using C++.
Here are some snippets:
class Client : public node::ObjectWrap, public someObjectObserver {
public:
void onAsyncMethodEnds() {
Local<Value> argv[] = { Local<Value>::New(String::New("TheString")) };
this->callback->Call(Context::GetCurrent()->Global(), 1, argv);
}
....
private:
static v8::Handle<v8::Value> BeInitiator(const v8::Arguments& args) {
HandleScope scope;
Client* client = ObjectWrap::Unwrap<Client>(args.This());
client->someObject->asyncMethod(client, NULL);
return scope.Close(Boolean::New(true));
}
static v8::Handle<v8::Value> SetCallback(const v8::Arguments& args) {
HandleScope scope;
Client* client = ObjectWrap::Unwrap<Client>(args.This());
client->callback = Persistent<Function>::New(Handle<Function>::Cast(args[0]));
return scope.Close(Boolean::New(true));
}
I need to save a JavaScript function as a callback to call it later.
The Client class is an observer for another object and the javascript callback should be called from onAsyncMethodEnds.
Unfortunately, when I call the function "BeInitiator" I receive a "Bus error: 10" just before the callback's Call().
Thanks in advance.
You cannot ->Call from another thread. JavaScript and Node are single threaded and attempting to call a function from another thread amounts to trying to run two threads of JS at once.
You should either re-work your code to not do that, or you should read up on libuv's threading library. It provides uv_async_send, which can be used to trigger a callback in the main JS loop from a separate thread.
There are docs here: http://nikhilm.github.io/uvbook/threads.html
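A bare-bones sketch of that pattern (hypothetical names, error handling omitted): the async handle is initialized on the main thread, and uv_async_send is the only call made from the worker thread.
#include <uv.h>

static uv_async_t gAsync;   // must outlive any thread that may signal it

// Runs on the main JS loop; it is safe to touch V8 handles and call JS here.
static void OnMainLoop(uv_async_t* handle) {
    // e.g. create a HandleScope and invoke the stored persistent callback
}

// Call once from the main thread (e.g. in the addon's init or in SetCallback).
void InitAsync() {
    uv_async_init(uv_default_loop(), &gAsync, OnMainLoop);
}

// Safe to call from any thread, e.g. from onAsyncMethodEnds().
void NotifyMainLoop() {
    uv_async_send(&gAsync);
}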