Why should we pass a pointer, &handleRequest, to listenHTTP?

I am reading the vibe.d examples and can't understand the following:
import vibe.d;

shared static this()
{
    auto settings = new HTTPServerSettings;
    settings.port = 8080;
    listenHTTP(settings, &handleRequest);
}

void handleRequest(HTTPServerRequest req,
                   HTTPServerResponse res)
{
    if (req.path == "/")
        res.writeBody("Hello, World!", "text/plain");
}
Why are we passing a pointer, &handleRequest, to listenHTTP? I mean, why can't we simply call it for every request?
And what about HTTPServerRequest req and HTTPServerResponse res? Are they created at the moment handleRequest is called, or when?

The pointer is how the library knows which function you want it to call on every request.
You pass it the pointer to the function; vibe.d then creates the req and res objects and calls the pointed-to function each time it gets a new request.
If you tried to pass handleRequest without the pointer, it would try to call it at setup time, before a request was actually ready.


IOUserClientMethodArguments completion value is always NULL

I'm trying to use IOConnectCallAsyncStructMethod in order to set a callback between a client and a driver in DriverKit for iPadOS.
This is how I call IOConnectCallAsyncStructMethod
ret = IOConnectCallAsyncStructMethod(connection, MessageType_RegisterAsyncCallback, masterPort, asyncRef, kIOAsyncCalloutCount, nullptr, 0, &outputAssignCallback, &outputSize);
Where asyncRef is:
asyncRef[kIOAsyncCalloutFuncIndex] = (io_user_reference_t)AsyncCallback;
asyncRef[kIOAsyncCalloutRefconIndex] = (io_user_reference_t)nullptr;
and AsyncCallback is:
static void AsyncCallback(void* refcon, IOReturn result, void** args, uint32_t numArgs)
{
    const char* funcName = nullptr;
    uint64_t* arrArgs = (uint64_t*)args;
    ReadDataStruct* output = (ReadDataStruct*)(arrArgs + 1);

    switch (arrArgs[0])
    {
        case 1:
            funcName = "'Register Async Callback'";
            break;
        case 2:
            funcName = "'Async Request'";
            break;
        default:
            funcName = "UNKNOWN";
            break;
    }

    printf("Got callback of %s from dext with returned data ", funcName);
    printf("with return code: 0x%08x.\n", result);

    // Stop the run loop so our program can return to normal processing.
    CFRunLoopStop(globalRunLoop);
}
But IOConnectCallAsyncStructMethod is always returning kIOReturnBadArgument and I can see that when the method:
kern_return_t MyDriverClient::ExternalMethod(uint64_t selector, IOUserClientMethodArguments* arguments,
                                             const IOUserClientMethodDispatch* dispatch, OSObject* target,
                                             void* reference)
{
    kern_return_t ret = kIOReturnSuccess;
    if (selector < NumberOfExternalMethods)
    {
        dispatch = &externalMethodChecks[selector];
        if (!target)
        {
            target = this;
        }
    }
    return super::ExternalMethod(selector, arguments, dispatch, target, reference);
}
is called, in IOUserClientMethodArguments* arguments, completion is completion = (OSAction *) NULL.
This is the IOUserClientMethodDispatch I use to check the values:
[ExternalMethodType_RegisterAsyncCallback] =
{
.function = (IOUserClientMethodFunction) &Mk1dDriverClient::StaticRegisterAsyncCallback,
.checkCompletionExists = true,
.checkScalarInputCount = 0,
.checkStructureInputSize = 0,
.checkScalarOutputCount = 0,
.checkStructureOutputSize = sizeof(ReadDataStruct),
},
Any idea what I'm doing wrong? Or any other ideas?
The likely cause for kIOReturnBadArgument:
The port argument in your method call looks suspicious:
IOConnectCallAsyncStructMethod(connection, MessageType_RegisterAsyncCallback, masterPort, …
------------------------------------------------------------------------------^^^^^^^^^^
If you're passing the IOKit main/master port (kIOMasterPortDefault) into here, that's wrong. The purpose of this argument is to provide a notification Mach port which will receive the async completion message. You'll want to create a port and schedule it on an appropriate dispatch queue or runloop. I typically use something like this:
// Save this somewhere for the entire time you might receive notification callbacks:
IONotificationPortRef notify_port = IONotificationPortCreate(kIOMasterPortDefault);
// Set the GCD dispatch queue on which we want callbacks called (can be main queue):
IONotificationPortSetDispatchQueue(notify_port, callback_dispatch_queue);
// This is what you pass to each async method call:
mach_port_t callback_port = IONotificationPortGetMachPort(notify_port);
And once you're done with the notification port, make sure to destroy it using IONotificationPortDestroy().
It looks like you might be using run loops. In that case, instead of calling IONotificationPortSetDispatchQueue, you can use the IONotificationPortGetRunLoopSource function to get the notification port's run loop source, which you can then schedule on the CFRunLoop object you're using.
Some notes about async completion arguments:
You haven't posted your DriverKit-side AsyncCompletion() call, and at any rate this isn't causing your immediate problem, but it will probably blow up once you fix the async call itself:
If your async completion passes only 2 user arguments, you're using the wrong callback function signature on the app side. Instead of IOAsyncCallback you must use the IOAsyncCallback2 form.
Also, even if you are passing 3 or more arguments where the IOAsyncCallback form is correct, I believe this code technically triggers undefined behaviour due to aliasing rules:
uint64_t* arrArgs = (uint64_t*)args;
ReadDataStruct* output = (ReadDataStruct*)(arrArgs + 1);
switch (arrArgs[0])
The following would, I think, be correct:
ReadDataStruct* output = (ReadDataStruct*)(args + 1);
switch ((uintptr_t)args[0])
(Don't cast the array pointer itself, cast each void* element.)
Notes about async output struct arguments
I notice you have a struct output argument in your async method call, with a buffer that looks fairly small. If you're planning to update that with data on the DriverKit side after the initial ExternalMethod returns, you may be in for a surprise: an output struct argument that is not passed as an IOMemoryDescriptor will be copied to the app side immediately on method return, not when the async completion is triggered.
So how do you fix this? For very small data, pass it in the async completion arguments themselves. For arbitrarily sized byte buffers, the only way I know of is to ensure the output struct argument is passed via IOMemoryDescriptor, which can be persistently memory-mapped in a shared mapping between the driver and the app process. OK, how do you pass it as a memory descriptor? Basically, the output struct must be larger than 4096 bytes. Yes, this essentially means that you may have to make your buffer unnaturally large.

Error: RefNonZero When Returning a unique_ptr to ClientReader in gRPC

After defining a method of the following form:
std::unique_ptr<ClientReader<FlowCellPositionResponse> > method(FlowCellPositionsRequest request)
{
    ...
    ClientContext context;
    return stub->some_method(&context, request); // Also tried std::move
}
within a file and accessing this method via another file's method like so:
FlowCellPositionsRequest request;
FlowCellPositionsResponse response;
std::unique_ptr<ClientReader<FlowCellPositionResponse> > reader = file.method(request);

while (reader->Read(&response)) { // Error raised here
    ...
}

Status status = reader->Finish();
I get the following error:
Assertion failed: (prior > 0), function RefNonZero, file ref_counted.h, line 119.
[1] 2450 abort ./program
If I move this logic back into method(), it runs fine, but I wanted to create this abstraction. I'm still quite new to both C++ and gRPC, and I was wondering what I'm doing wrong.
The ClientContext is going out of scope when method() returns, but that object needs to outlive the ClientReader<> object that you're returning.
I think what you probably want here is an object to hold all of the state needed for the RPC, including both the ClientContext and the ClientReader<>. Then you can return that object from method().

Right way to add a per-request response delay to a custom HttpService

Here's my current implementation of HttpService.serve()
@Override
public HttpResponse serve(ServiceRequestContext ctx, HttpRequest req) throws Exception {
    return HttpResponse.from(req.aggregate().thenApply(ahr -> {
        MyResponse myResponse = Utils.handle(ahr);
        HttpResponse httpResponse = Utils.toResponse(myResponse);
        return httpResponse;
    }));
}
I have a user-defined response delay which can vary per each individual request-response, and this is available in the myResponse object.
What is the best way to apply this delay in a non-blocking way? I can see some delay APIs, but they are protected within HttpResponse. Any extra tips or pointers to the streaming API design or decorators would be helpful. I'm really learning a lot from the Armeria code base :)
If you know the desired delay even before consuming the request body, you can simply use HttpResponse.delayed():
@Override
public HttpResponse serve(ServiceRequestContext ctx, HttpRequest req) throws Exception {
    return HttpResponse.delayed(
            HttpResponse.of(200),
            Duration.ofSeconds(3),
            ctx.eventLoop());
}
If you need to consume the content or perform some operation to calculate the desired delay, you can combine HttpResponse.delayed() with HttpResponse.from():
@Override
public HttpResponse serve(ServiceRequestContext ctx, HttpRequest req) throws Exception {
    return HttpResponse.from(req.aggregate().thenApply(ahr -> {
    //                                       ^^^^^^^^^
        MyResponse myResponse = Utils.handle(ahr);
        HttpResponse httpResponse = Utils.toResponse(myResponse);
        Duration myDelay = Utils.delayMillis(...);
        return HttpResponse.delayed(httpResponse, myDelay, ctx.eventLoop());
        //                  ^^^^^^^
    }));
}
If the delay is not actually delay but waiting for something to happen, you can use CompletableFuture.thenCompose() which is more powerful:
@Override
public HttpResponse serve(ServiceRequestContext ctx, HttpRequest req) throws Exception {
    return HttpResponse.from(req.aggregate().thenCompose(ahr -> {
    //                                       ^^^^^^^^^^^
        My1stResponse my1stRes = Utils.handle(ahr);
        // Schedule some asynchronous task that returns another future.
        CompletableFuture<My2ndResponse> myFuture = doSomething(my1stRes);
        // Map the future into an HttpResponse.
        return myFuture.thenApply(my2ndRes -> {
            HttpResponse httpRes = Utils.toResponse(my1stRes, my2ndRes);
            return httpRes;
        });
    }));
}
For even more complicated workflows, I'd recommend looking into Reactive Streams implementations such as Project Reactor and RxJava, which provide the tools for avoiding callback hell. 😄

Native v8::Promise Result

I'm trying to call a JS-function from C++ using v8/Nan which in turn returns a Promise.
Assuming I have a generic Nan Callback
Nan::Callback fn
I then call this function using the following code
Nan::AsyncResource resource(Nan::New<v8::String>("myresource").ToLocalChecked());
Nan::MaybeLocal<v8::Value> value = resource.runInAsyncScope(Nan::GetCurrentContext()->Global(), fn, 0, 0);
The function is being called correctly, and I receive the promise on the C++ side
v8::Handle<v8::Promise> promiseReturnObject =
v8::Handle<v8::Promise>::Cast ( value.ToLocalChecked() );
I can then check the state of the promise using
v8::Promise::PromiseState promiseState = promiseReturnObject->State();
Of course, at that point the promise is still pending, and I can't access its result. The only way I've found so far to receive the result of that promise is by using the Then method on the promiseReturnObject.
promiseReturnObject->Then(Nan::GetCurrentContext(), callbackFn);
Is there any way to retrieve that result synchronously in the scope of the function that calls fn? I've tried using std::promise and passing it as a data argument to the v8::FunctionTemplate of callbackFn, but calling wait or get on the respective std::future blocks execution and the promise is never fulfilled. Do I need to resort to callbacks?
Any help or idea on how I could set this up would be much appreciated.
I derived an answer from https://github.com/nodejs/node/issues/5691
if (result->IsPromise()) {
    Local<Promise> promise = result.As<Promise>();
    if (promise->HasHandler()) {
        while (promise->State() == Promise::kPending) {
            Isolate::GetCurrent()->RunMicrotasks();
        }
        if (promise->State() == Promise::kRejected) {
            Nan::ThrowError(promise->Result());
        } else {
            // ... process promise->Result() ...
        }
    }
}

Async client response with cpp-netlib?

I am considering using cpp netlib for a new project. All of the examples show reading the body of the response in a blocking manner:
client::response response_ = client_.get(request_);
std::string body_ = body(response_);
If I construct my client object with the async tag:
basic_client<http_async_8bit_udp_resolve_tags, 1, 1> client_;
What effect does that have?
Is it possible to get the results of the body wrapper as a boost::shared_future<std::string>?
Do I just need to wrap the blocking call in its own thread?
Look at the current HTTP client docs: http://cpp-netlib.org/0.12.0/reference/http_client.html
The HTTP client is now always async by default. You have the option of providing a callback function or object in the get or post call:
struct body_handler {
    explicit body_handler(std::string & body)
        : body(body) {}

    BOOST_NETWORK_HTTP_BODY_CALLBACK(operator(), range, error) {
        // in here, range is the Boost.Range iterator_range, and error is
        // the Boost.System error code.
        if (!error)
            body.append(boost::begin(range), boost::end(range));
    }

    std::string & body;
};

// somewhere else
std::string some_string;
response_ = client_.get(request("http://cpp-netlib.github.com/"),
                        body_handler(some_string));
The client::response object already encapsulates the future objects:
The response object encapsulates futures which get filled in once the values are available.