Native v8::Promise Result - C++

I'm trying to call a JS-function from C++ using v8/Nan which in turn returns a Promise.
Assuming I have a generic Nan Callback
Nan::Callback fn
I then call this function using the following code
Nan::AsyncResource resource(Nan::New<v8::String>("myresource").ToLocalChecked());
Nan::MaybeLocal<v8::Value> value = resource.runInAsyncScope(Nan::GetCurrentContext()->Global(), fn, 0, 0);
The function is being called correctly, and I receive the promise on the C++ side
v8::Handle<v8::Promise> promiseReturnObject =
    v8::Handle<v8::Promise>::Cast( value.ToLocalChecked() );
I can then check the state of the promise using
v8::Promise::PromiseState promiseState = promiseReturnObject->State();
Of course at that time the promise is still pending, and I can't access its result. The only way I've found so far to receive the result of that promise is by using the Then method on the promiseReturnObject.
promiseReturnObject->Then(Nan::GetCurrentContext(), callbackFn);
Is there any way to retrieve that result synchronously in the scope of the function that calls fn? I've tried using std::promise and passing it as a data argument to the v8::FunctionTemplate of callbackFn, but calling wait or get on the respective std::future blocks execution and the promise is never fulfilled. Do I need to resort to callbacks?
Any help or idea on how I could set this up would be much appreciated.

I derived an answer from https://github.com/nodejs/node/issues/5691
if (result->IsPromise()) {
    Local<Promise> promise = result.As<Promise>();
    if (promise->HasHandler()) {
        while (promise->State() == Promise::kPending) {
            Isolate::GetCurrent()->RunMicrotasks();
        }
        if (promise->State() == Promise::kRejected) {
            Nan::ThrowError(promise->Result());
        } else {
            // ... process promise->Result() ...
        }
    }
}
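For reuse, the same idea can be wrapped into a helper. This is a minimal sketch, assuming the promise is settled purely by microtasks; if it depends on pending libuv I/O, the loop below spins forever, which also explains why blocking on a std::future never let the promise resolve. The name AwaitPromise is mine, and newer V8 versions replace Isolate::RunMicrotasks() with Isolate::PerformMicrotaskCheckpoint():

// Hypothetical helper: pump the microtask queue until the promise settles.
// Only valid when resolution does not depend on pending libuv I/O.
v8::Local<v8::Value> AwaitPromise(v8::Isolate* isolate,
                                  v8::Local<v8::Promise> promise) {
    while (promise->State() == v8::Promise::kPending) {
        isolate->RunMicrotasks();
    }
    if (promise->State() == v8::Promise::kRejected) {
        Nan::ThrowError(promise->Result());
        return v8::Undefined(isolate);
    }
    return promise->Result();
}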


IOUserClientMethodArguments completion value is always NULL

I'm trying to use IOConnectCallAsyncStructMethod in order to set a callback between a client and a driver in DriverKit for iPadOS.
This is how I call IOConnectCallAsyncStructMethod
ret = IOConnectCallAsyncStructMethod(connection, MessageType_RegisterAsyncCallback, masterPort, asyncRef, kIOAsyncCalloutCount, nullptr, 0, &outputAssignCallback, &outputSize);
Where asyncRef is:
asyncRef[kIOAsyncCalloutFuncIndex] = (io_user_reference_t)AsyncCallback;
asyncRef[kIOAsyncCalloutRefconIndex] = (io_user_reference_t)nullptr;
and AsyncCallback is:
static void AsyncCallback(void* refcon, IOReturn result, void** args, uint32_t numArgs)
{
    const char* funcName = nullptr;
    uint64_t* arrArgs = (uint64_t*)args;
    ReadDataStruct* output = (ReadDataStruct*)(arrArgs + 1);

    switch (arrArgs[0])
    {
        case 1:
        {
            funcName = "'Register Async Callback'";
        } break;
        case 2:
        {
            funcName = "'Async Request'";
        } break;
        default:
        {
            funcName = "UNKNOWN";
        } break;
    }

    printf("Got callback of %s from dext with returned data ", funcName);
    printf("with return code: 0x%08x.\n", result);

    // Stop the run loop so our program can return to normal processing.
    CFRunLoopStop(globalRunLoop);
}
But IOConnectCallAsyncStructMethod is always returning kIOReturnBadArgument and I can see that when the method:
kern_return_t MyDriverClient::ExternalMethod(uint64_t selector, IOUserClientMethodArguments* arguments, const IOUserClientMethodDispatch* dispatch, OSObject* target, void* reference) {
    kern_return_t ret = kIOReturnSuccess;

    if (selector < NumberOfExternalMethods)
    {
        dispatch = &externalMethodChecks[selector];
        if (!target)
        {
            target = this;
        }
    }
    return super::ExternalMethod(selector, arguments, dispatch, target, reference);
}
is called, the completion field of IOUserClientMethodArguments* arguments is completion = (OSAction *) NULL.
This is the IOUserClientMethodDispatch I use to check the values:
[ExternalMethodType_RegisterAsyncCallback] =
{
    .function = (IOUserClientMethodFunction) &Mk1dDriverClient::StaticRegisterAsyncCallback,
    .checkCompletionExists = true,
    .checkScalarInputCount = 0,
    .checkStructureInputSize = 0,
    .checkScalarOutputCount = 0,
    .checkStructureOutputSize = sizeof(ReadDataStruct),
},
Any idea what I'm doing wrong? Or any other ideas?
The likely cause for kIOReturnBadArgument:
The port argument in your method call looks suspicious:
IOConnectCallAsyncStructMethod(connection, MessageType_RegisterAsyncCallback, masterPort, …
------------------------------------------------------------------------------^^^^^^^^^^
If you're passing the IOKit main/master port (kIOMasterPortDefault) into here, that's wrong. The purpose of this argument is to provide a notification Mach port which will receive the async completion message. You'll want to create a port and schedule it on an appropriate dispatch queue or runloop. I typically use something like this:
// Save this somewhere for the entire time you might receive notification callbacks:
IONotificationPortRef notify_port = IONotificationPortCreate(kIOMasterPortDefault);
// Set the GCD dispatch queue on which we want callbacks called (can be main queue):
IONotificationPortSetDispatchQueue(notify_port, callback_dispatch_queue);
// This is what you pass to each async method call:
mach_port_t callback_port = IONotificationPortGetMachPort(notify_port);
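The corrected call would then pass callback_port where the question passes masterPort; a sketch reusing the question's own names:

// Sketch: the notification port, not the IOKit master port, is what
// receives the async completion message.
ret = IOConnectCallAsyncStructMethod(connection,
                                     MessageType_RegisterAsyncCallback,
                                     callback_port,  // from IONotificationPortGetMachPort()
                                     asyncRef, kIOAsyncCalloutCount,
                                     nullptr, 0,
                                     &outputAssignCallback, &outputSize);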
And once you're done with the notification port, make sure to destroy it using IONotificationPortDestroy().
It looks like you might be using runloops. In that case, instead of calling IONotificationPortSetDispatchQueue, you can use the IONotificationPortGetRunLoopSource function to get the notification port's runloop source, which you can then schedule on the CFRunLoop object you're using.
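A sketch of that runloop variant, assuming globalRunLoop is the CFRunLoopRef that the question's AsyncCallback stops:

// Schedule the notification port's runloop source on the runloop
// that CFRunLoopStop(globalRunLoop) later stops.
CFRunLoopSourceRef source = IONotificationPortGetRunLoopSource(notify_port);
CFRunLoopAddSource(globalRunLoop, source, kCFRunLoopDefaultMode);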
Some notes about async completion arguments:
You haven't posted your DriverKit-side AsyncCompletion() call, and at any rate this isn't causing your immediate problem, but it will probably blow up once you fix the async call itself:
If your async completion passes only 2 user arguments, you're using the wrong callback function signature on the app side. Instead of IOAsyncCallback you must use the IOAsyncCallback2 form.
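For reference, a sketch of the two-argument form; the IOAsyncCallback2 typedef comes from the IOKit headers, while the body is hypothetical:

// IOAsyncCallback2 receives the two user arguments directly instead of
// packing them into a void** array.
static void AsyncCallback2(void* refcon, IOReturn result, void* arg0, void* arg1)
{
    uintptr_t messageId = (uintptr_t)arg0;  // first async completion argument
    printf("Got callback %lu with return code: 0x%08x.\n",
           (unsigned long)messageId, result);
}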
Also, even if you are passing 3 or more arguments where the IOAsyncCallback form is correct, I believe this code technically triggers undefined behaviour due to aliasing rules:
uint64_t* arrArgs = (uint64_t*)args;
ReadDataStruct* output = (ReadDataStruct*)(arrArgs + 1);
switch (arrArgs[0])
The following would I think be correct:
ReadDataStruct* output = (ReadDataStruct*)(args + 1);
switch ((uintptr_t)args[0])
(Don't cast the array pointer itself, cast each void* element.)
Notes about async output struct arguments
I notice you have a struct output argument in your async method call, with a buffer that looks fairly small. If you're planning to update it with data on the DriverKit side after the initial ExternalMethod returns, you may be in for a surprise: an output struct argument that is not passed as an IOMemoryDescriptor will be copied to the app side immediately when the method returns, not when the async completion is triggered.
So how do you fix this? For very small data, pass it in the async completion arguments themselves. For arbitrarily sized byte buffers, the only way I know of is to ensure the output struct argument is passed via IOMemoryDescriptor, which can be persistently memory-mapped in a shared mapping between the driver and the app process. OK, so how do you pass it as a memory descriptor? Basically, the output struct must be larger than 4096 bytes. Yes, this essentially means that you may have to make your buffer unnaturally large.
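A sketch of that padding trick; the type and field names are mine, and the assumption is that an oversized output struct reaches the driver as arguments->structureOutputDescriptor rather than being copied into arguments->structureOutput:

// Assumption: padding the output struct past 4096 bytes makes the kernel
// hand it to the driver as an IOMemoryDescriptor instead of copying it.
typedef struct {
    ReadDataStruct data;
    uint8_t padding[4096];  // forces the memory-descriptor transport path
} ReadDataSharedStruct;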

What does this size_t in the lambda do? C++ code

I'm new to programming in C++, and I came across this syntax. Could someone explain the point of the size_t in this syntax?
// Close the file stream.
.then([=](size_t)
{
    return fileStream->close();
});
It's the type of the argument passed to the function. The argument is not used in the function. Hence, it is not named. Only the type of the argument is there.
The type of the argument is there presumably because the client to which the lambda expression is passed expects it to have an argument of type size_t. The client has no way of knowing how the argument is used in the lambda expression or whether it is used at all.
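A minimal self-contained illustration of an unnamed parameter, with names of my own:

#include <iostream>

int main() {
    // The int parameter is required by the signature the caller expects,
    // but the lambda never uses it, so it stays unnamed.
    auto ignore_arg = [](int) { std::cout << "called\n"; };
    ignore_arg(42);  // prints "called"; the 42 is simply discarded
}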
This is like callbacks, where your callback receives data from the caller and you do whatever you want with that data.
So if you don't need the data, you can skip naming the parameter, as it's unreferenced.
You can see more examples of callbacks by reading the documentation of some WinAPI functions, especially those that enumerate things, e.g. EnumWindows and EnumChildWindows with their EnumProc callbacks.
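For instance, a sketch of such an enumeration callback; the signature is the standard one from <windows.h>, the body is hypothetical:

#include <windows.h>

// EnumWindows hands each top-level window handle to this callback, plus
// the LPARAM the caller supplied; returning TRUE continues enumeration.
BOOL CALLBACK EnumWindowsProc(HWND hwnd, LPARAM lParam)
{
    return TRUE;  // keep enumerating; hwnd would be used here
}

// Usage: EnumWindows(EnumWindowsProc, 0);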
As others have said, the lambda expression
[=](size_t)
{
    return fileStream->close();
}
is being passed to a method call
.then()
To shed some additional light: usually, a method called .then() is part of a Futures callback interface. The then() method is called on a Future<T> object, where T is some type. It will expect a callback parameter. This causes callback chaining: when the Future<T> is fulfilled, we will have a T, and at this point in time the callback is invoked with that T.
In your case, T = size_t. So presumably, the Future object that .then() is called on returns a size_t, which is then passed to the lambda [=] (size_t) { ... }. The lambda then discards the size_t because it doesn't need it.
What's the point of taking the size_t parameter if it doesn't need it? Well, maybe the original Future object was some kind of read call, and it stored the result somewhere else (i.e. the work is done by side-effect) and returned the number of bytes it read (the size_t). But the callback is just doing some cleanup work and doesn't care about what was read. It would be like the following synchronous code:
size_t readFile(char* buf) {
    // ... store stuff in buf
    return bytesRead;
}

auto closeFileStream(size_t) {
    return fileStream->close();
}

closeFileStream(readFile(&buf));
In terms of Futures, it's probably something more like:
Future<size_t> readFile(char* buf) {
    // ... asynchronously store stuff in buf
    // and return bytesRead as a Future
}

auto closeFileStream(size_t) {
    return fileStream->close();
}

readFile(&buf)
    .then(closeFileStream)
    .get(); // wait synchronously

What's a proper way to use set_alert_notify to wake up main thread?

I'm trying to write my own torrent program based on libtorrent-rasterbar and I'm having problems getting the alert mechanism to work correctly. libtorrent offers the function
void set_alert_notify (boost::function<void()> const& fun);
which is supposed to
The intention of the function is that the client wakes up its main thread, to poll for more alerts using pop_alerts(). If the notify function fails to do so, it won't be called again, until pop_alerts is called for some other reason.
So far so good; I think I understand the intention behind this function. However, my actual implementation doesn't work so well. My code so far is like this:
std::unique_lock<std::mutex> ul(_alert_m);
session.set_alert_notify([&]() { _alert_cv.notify_one(); });

while (!_alert_loop_should_stop) {
    if (!session.wait_for_alert(std::chrono::seconds(0))) {
        _alert_cv.wait(ul);
    }

    std::vector<libtorrent::alert*> alerts;
    session.pop_alerts(&alerts);

    for (auto alert : alerts) {
        LTi_ << alert->message();
    }
}
However, there is a race condition: if wait_for_alert returns NULL (since there are no alerts yet) but the function passed to set_alert_notify is called before _alert_cv.wait(ul);, the whole loop waits forever (because of the second sentence from the quote).
For the moment my solution is just changing _alert_cv.wait(ul); to _alert_cv.wait_for(ul, std::chrono::milliseconds(250));, which keeps the number of loop iterations per second low enough while still keeping latency low.
But it's really more a workaround than a solution, and I keep thinking there must be a proper way to handle this.
You need a variable to record the notification. It should be protected by the same mutex that is used with the condition variable.
bool _alert_pending = false;

session.set_alert_notify([&]() {
    std::lock_guard<std::mutex> lg(_alert_m);
    _alert_pending = true;
    _alert_cv.notify_one();
});

std::unique_lock<std::mutex> ul(_alert_m);
while (!_alert_loop_should_stop) {
    _alert_cv.wait(ul, [&]() {
        return _alert_pending || _alert_loop_should_stop;
    });
    if (_alert_pending) {
        _alert_pending = false;
        ul.unlock();
        session.pop_alerts(...);
        ...
        ul.lock();
    }
}
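The same flag-plus-condition-variable shape, reduced to a self-contained program with plain std::thread in place of libtorrent; all names here are mine:

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool pending = false;  // the "notification happened" flag, guarded by m

void notifier() {
    std::lock_guard<std::mutex> lg(m);
    pending = true;    // record the event before notifying,
    cv.notify_one();   // so a wakeup can never be lost
}

int main() {
    std::thread t(notifier);
    std::unique_lock<std::mutex> ul(m);
    cv.wait(ul, [] { return pending; });  // the predicate re-checks the flag
    std::cout << "got notification\n";
    t.join();
}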

simple callback control using boost::asio and c++11 lambda

I'm implementing a simple server with boost::asio and thinking of an io-service-per-CPU model (each io_service has one thread).
What I want to do is let one io_service request jobs from another io_service (something like message passing).
I think boost::asio::io_service::post can help me.
There are two io_services, ios1 and ios2, a job (function) bool func(arg *), and a completion handler void callback(bool).
So I want ios1 to request a job, ios2 to run it and notify ios1 when it finishes, and finally ios2 to run the handler.
ios2.post(
    [&ios1, arg_ptr, callback, func]
    {
        bool result = func(arg_ptr);
        ios1.post( []{ callback(result) } );
    } );
Does this code work? And is there any smarter and simpler way?
EDIT:
I found that the second lambda inside the ios1.post() can't reach the function pointer callback. It's out of scope... so I'm trying another way using boost::bind().
ios2.post(
    [&ios1, arg_ptr, callback, func]
    {
        ios1.post( boost::bind( callback, func(arg_ptr) ) );
    } );
I removed one stack variable bool and it seems better.
But using C++11 lambdas and boost::bind together doesn't look so cool.
How can I do this without boost::bind?
I found that the second lambda inside the ios1.post() can't reach the function pointer callback. It's out of scope
I don't think that's the problem.
You're trying to capture callback but that's not a function pointer, it's a function. You don't need to capture a function, you can just call it! The same applies to func, don't capture it just call it. Finally, your inner lambda refers to result without capturing it.
It will work if you fix these problems:
ios2.post(
    [&ios1, arg_ptr]
    {
        bool result = func(arg_ptr);
        ios1.post( [result]{ callback(result); } );
    }
);
Your second version is not really different: boost::bind evaluates func(arg_ptr) eagerly when the bind expression is constructed, so func still runs in the thread of ios2 and only the callback runs on ios1. And I'm not sure either version fits your description:
So I want ios1 to request a job, ios2 to run it and notify ios1 when it finishes, and finally ios2 to run the handler.
In both your code samples ios1 runs the callback handler.
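If the handler really must end up on ios2, one option is to hop back once more; a sketch, assuming func and callback are reachable without capture as in the fixed version above:

// ios2 runs func, ios1 is notified, and ios2 finally runs the handler.
ios2.post(
    [&ios1, &ios2, arg_ptr]
    {
        bool result = func(arg_ptr);
        ios1.post( [&ios2, result]
        {
            // ios1 observes completion here, then hands the result back
            ios2.post( [result]{ callback(result); } );
        } );
    } );

For completeness, here is a self-contained variant of the fixed code that compiles: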
#include <boost/asio/io_service.hpp>
#include <boost/function.hpp>

typedef int arg;

int main()
{
    arg* arg_ptr;
    boost::function<void(bool)> callback;
    boost::function<bool(arg*)> func;

    boost::asio::io_service ios1, ios2;

    ios2.post(
        [&ios1, arg_ptr, callback, func]
        {
            bool result = func(arg_ptr);
            auto callback1 = callback;
            ios1.post( [=]{ callback1(result); } );
        } );
}

Pattern for future conversion

Currently we are using asynchronous values very heavily.
Assume that I have a function which does something like this:
int do_something(const boost::posix_time::time_duration& sleep_time)
{
    BOOST_MESSAGE("Sleeping a bit");
    boost::this_thread::sleep(sleep_time);
    BOOST_MESSAGE("Finished taking a nap");
    return 42;
}
At some point in the code we create a task which creates a future to such an int value, set by a packaged_task, like this (working_queue is a boost::asio::io_service in this example):
boost::unique_future<int> createAsynchronousValue(const boost::posix_time::seconds& sleep)
{
    boost::shared_ptr< boost::packaged_task<int> > task(
        new boost::packaged_task<int>(boost::bind(do_something, sleep)));
    boost::unique_future<int> ret = task->get_future();

    // Trigger execution
    working_queue.post(boost::bind(&boost::packaged_task<int>::operator (), task));

    return boost::move(ret);
}
At another point in the code I want to wrap this function to return some higher-level object, which should also be a future. I need a conversion function which takes the first value and transforms it into another value. (In our actual code we have some layering and do asynchronous RPC which returns futures to responses; these responses should be converted into futures to real objects, PODs, or even void futures, so that we can wait on them or catch exceptions.) So this is the conversion function in this example:
float converter(boost::shared_future<int> value)
{
    BOOST_MESSAGE("Converting value " << value.get());
    return 1.0f * value.get();
}
Then I thought of creating a lazy future as described in the Boost docs to do this conversion only if wanted:
void invoke_lazy_task(boost::packaged_task<float>& task)
{
    try
    {
        task();
    }
    catch (boost::task_already_started&)
    {}
}
And then I have a function (might be a higher level API) to create a wrapped future:
boost::unique_future<float> createWrappedFuture(const boost::posix_time::seconds& sleep)
{
    boost::shared_future<int> int_future(createAsynchronousValue(sleep));
    BOOST_MESSAGE("Creating converter task");
    boost::packaged_task<float> wrapper(boost::bind(converter, int_future));
    BOOST_MESSAGE("Setting wait callback");
    wrapper.set_wait_callback(invoke_lazy_task);
    BOOST_MESSAGE("Creating future to converter task");
    boost::unique_future<float> future = wrapper.get_future();
    BOOST_MESSAGE("Returning the future");
    return boost::move(future);
}
At the end I want to be able to use it like this:
{
    boost::unique_future<float> future = createWrappedFuture(boost::posix_time::seconds(1));
    BOOST_MESSAGE("Waiting for the future");
    future.wait();
    BOOST_CHECK_EQUAL(future.get(), 42.0f);
}
But here I end up getting an exception about a broken promise. The reason seems to be pretty clear for me because the packaged_task which does the conversion goes out of scope.
So my question is: how do I deal with such situations? How can I prevent the task from being destroyed? Is there a pattern for this?
Best,
Ronny
You need to manage the lifetime of the task object properly.
The most correct way is to return boost::packaged_task<float> instead of boost::unique_future<float> from createWrappedFuture(). The caller will be responsible for getting the future object and for keeping the task alive until the future value is ready.
Or you can place the task object into some 'pending' queue (global or a class member), the same way you did in createAsynchronousValue. But in this case you will need to explicitly manage the task's lifetime and remove it from the queue after completion, so I don't think this solution has advantages over returning the task object itself.
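A sketch of the queue option, reusing the question's own working_queue and mirroring createAsynchronousValue(); the shared_ptr bound into the posted handler keeps the task alive until it has run, at the cost of giving up the lazy evaluation:

boost::unique_future<float> createWrappedFuture(const boost::posix_time::seconds& sleep)
{
    boost::shared_future<int> int_future(createAsynchronousValue(sleep));

    // Heap-allocate the conversion task; the shared_ptr inside the bind
    // expression keeps it alive until its promise is satisfied.
    boost::shared_ptr< boost::packaged_task<float> > wrapper(
        new boost::packaged_task<float>(boost::bind(converter, int_future)));
    boost::unique_future<float> future = wrapper->get_future();

    // Posted after the inner task, so a single worker thread runs
    // do_something first and the conversion afterwards.
    working_queue.post(boost::bind(&boost::packaged_task<float>::operator (), wrapper));
    return boost::move(future);
}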