Pipe Future result to self - akka

Is it safe to pipe a Future's result directly to 'self'?
Within an actor:
Future(hardWork()).pipeTo(self)
Or must we assign to a val:
val me = self
Future(hardWork()).pipeTo(me)

Everything in your code is safe, because you are not closing over anything. You are just calling a regular method pipeTo and passing in a regular parameter. Only closing over something (like you did in your own answer) might be dangerous, but in the case of self there is no danger because self is not mutable.

Apparently 'self' is safe, so no need for the val me = self.
// Completely safe, "self" is OK to close over
// and it's an ActorRef, which is thread-safe
Future { expensiveCalculation() } onComplete { f => self ! f.get }
http://doc.akka.io/docs/akka/2.2.3/general/jmm.html#Actors_and_shared_mutable_state

Kotlin runTest with delay() is not working

I am testing a coroutine that blocks. Here is my production code:
interface Incrementer {
    fun inc()
}

class MyViewModel : Incrementer, CoroutineScope {
    override val coroutineContext: CoroutineContext
        get() = Dispatchers.IO

    private val _number = MutableStateFlow(0)
    fun getNumber(): StateFlow<Int> = _number.asStateFlow()

    override fun inc() {
        launch(coroutineContext) {
            delay(100)
            _number.tryEmit(1)
        }
    }
}
And my test:
class IncTest {
    @BeforeEach
    fun setup() {
        Dispatchers.setMain(StandardTestDispatcher())
    }

    @AfterEach
    fun teardown() {
        Dispatchers.resetMain()
    }

    @Test
    fun incrementOnce() = runTest {
        val viewModel = MyViewModel()
        val results = mutableListOf<Int>()
        val resultJob = viewModel.getNumber()
            .onEach(results::add)
            .launchIn(CoroutineScope(UnconfinedTestDispatcher(testScheduler)))

        launch(StandardTestDispatcher(testScheduler)) {
            viewModel.inc()
        }.join()

        assertEquals(listOf(0, 1), results)
        resultJob.cancel()
    }
}
How would I go about testing my inc() function? (The interface is carved in stone, so I can't turn inc() into a suspend function.)
There are two problems here:
You want to wait for the work done in the coroutine that viewModel.inc() launches internally.
Ideally, the 100ms delay should be fast-forwarded during tests so that it doesn't actually take 100ms to execute.
Let's start with problem #2 first: for this, you need to be able to modify MyViewModel (but not inc), and change the class so that instead of using a hardcoded Dispatchers.IO, it receives a CoroutineContext as a parameter. With this, you could pass in a TestDispatcher in tests, which would use virtual time to fast-forward the delay. You can see this pattern described in the Injecting TestDispatchers section of the Android docs.
class MyViewModel(coroutineContext: CoroutineContext) : Incrementer {
    private val scope = CoroutineScope(coroutineContext)

    private val _number = MutableStateFlow(0)
    fun getNumber(): StateFlow<Int> = _number.asStateFlow()

    override fun inc() {
        scope.launch {
            delay(100)
            _number.tryEmit(1)
        }
    }
}
Here, I've also done some minor cleanup:
Made MyViewModel contain a CoroutineScope instead of implementing the interface, which is an officially recommended practice
Removed the coroutineContext parameter passed to launch, as it doesn't do anything in this case - the same context is in the scope anyway, so it'll already be used
For problem #1, waiting for work to complete, you have a few options:
If you've passed in a TestDispatcher, you can manually advance the coroutine created inside inc using testing methods like advanceUntilIdle. This is not ideal, because you're relying on implementation details a lot, and it's something you couldn't do in production. But it'll work if you can't use the nicer solution below.
viewModel.inc()
advanceUntilIdle() // Returns when all pending coroutines are done
The proper solution would be for inc to let its callers know when it's done performing its work. You could make it a suspending method instead of launching a new coroutine internally, but you stated that you can't modify the method to make it suspending. An alternative - if you're able to make this change - would be to create the new coroutine in inc using the async builder, returning the Deferred object it creates, and then await()-ing it at the call site.
override fun inc(): Deferred<Unit> {
    return scope.async {
        delay(100)
        _number.tryEmit(1)
        Unit // keep the result type Deferred<Unit>, since tryEmit returns a Boolean
    }
}
// In the test...
viewModel.inc().await()
If you're not able to modify either the method or the class, there's no way to avoid the delay() call causing a real 100ms delay. In this case, you can force your test to wait for that amount of time before proceeding. A plain delay() inside runTest won't do here, because it runs on a TestDispatcher and gets fast-forwarded in virtual time, so you need one of these workarounds instead:
// delay() on a different dispatcher
viewModel.inc()
withContext(Dispatchers.Default) { delay(100) }
// Use blocking sleep
viewModel.inc()
Thread.sleep(100)
For some final notes about the test code:
Since you're calling Dispatchers.setMain, you don't need to pass testScheduler into the TestDispatchers you create. They'll grab the scheduler from Main automatically if they find a TestDispatcher there, as described in its docs.
Instead of creating a new scope to pass in to launchIn, you could simply pass in this, the receiver of runTest, which points to a TestScope.

Pass context through composed promises in KJ

Playing with the KJ library, I wrote a small TCP server that reads a "PING" and responds with a "PONG". I managed to compose the promises like this:
char buffer[4];
kj::Own<kj::AsyncIoStream> clientStream;

addr->listen()->accept()
    .then([&buffer, &clientStream](kj::Own<kj::AsyncIoStream> stream) {
        clientStream = kj::mv(stream);
        return clientStream->tryRead(buffer, 4, 4);
    }).then([&buffer, &clientStream](size_t read) {
        KJ_LOG(INFO, kj::str("Received", read, " bytes: ", buffer));
        return clientStream->write("PONG", 4);
    }).wait(waitScope);
I had to keep buffer out of the promises and pass a reference to it. This means that buffer has to stay in scope until the last promise finishes. That's the case here, but is there a solution in case it isn't?
Same thing for clientStream: I had to declare it beforehand, then wait until I receive it from accept(), and at that point move it outside and use the reference to it.
Is there a better way to do it? Say like a way to pass some kind of context from promise to promise, always owned by the promises and therefore not having to stay "outside"?
It seems your problem is that your second lambda wants access to the scope of the first lambda, but the way you've organised things prevents that. You've worked around that by just adding variables to their shared "global" scope.
Instead, you could put the second lambda inside the first, something like this:
addr->listen()->accept()
    .then([](kj::Own<kj::AsyncIoStream> stream) {
        auto buffer = kj::heapArray<char>(4);
        auto promise = stream->tryRead(buffer.begin(), 4, 4);
        return promise.then([stream = kj::mv(stream), buffer = kj::mv(buffer)](size_t read) mutable {
            KJ_LOG(INFO, kj::str("Received", read, " bytes: ", buffer));
            return stream->write("PONG", 4);
        });
    }).wait(waitScope);

Native v8::Promise Result

I'm trying to call a JS-function from C++ using v8/Nan which in turn returns a Promise.
Assuming I have a generic Nan Callback
Nan::Callback fn
I then call this function using the following code
Nan::AsyncResource resource(Nan::New<v8::String>("myresource").ToLocalChecked());
Nan::MaybeLocal<v8::Value> value = resource.runInAsyncScope(Nan::GetCurrentContext()->Global(), fn, 0, 0);
The function is being called correctly, and I receive the promise on the C++ side
v8::Handle<v8::Promise> promiseReturnObject =
v8::Handle<v8::Promise>::Cast ( value.ToLocalChecked() );
I can then check the state of the promise using
v8::Promise::PromiseState promiseState = promiseReturnObject->State();
Of course at that point the promise is still pending, and I can't access its result. The only way I've found so far to receive the result of that promise is by using the Then method on the promiseReturnObject.
promiseReturnObject->Then(Nan::GetCurrentContext(), callbackFn);
Is there any way to retrieve that result synchronously in the scope of the function that calls fn? I've tried using std::promise and passing it as a data argument to the v8::FunctionTemplate of callbackFn, but calling wait or get on the respective std::future blocks the execution and the promise is never fulfilled. Do I need to resort to callbacks?
Any help or idea on how I could set this up would be much appreciated.
I derived an answer from https://github.com/nodejs/node/issues/5691
if (result->IsPromise()) {
    Local<Promise> promise = result.As<Promise>();
    if (promise->HasHandler()) {
        while (promise->State() == Promise::kPending) {
            Isolate::GetCurrent()->RunMicrotasks();
        }
        if (promise->State() == Promise::kRejected) {
            Nan::ThrowError(promise->Result());
        }
        else {
            // ... process promise->Result() ...
        }
    }
}

What does this size_t in the lambda do? C++ code

I'm new to programming in C++, and I came across this syntax. Could someone explain the point of the size_t in this syntax?
// Close the file stream.
.then([=](size_t)
{
return fileStream->close();
});
It's the type of the argument passed to the function. The argument is not used in the function. Hence, it is not named. Only the type of the argument is there.
The type of the argument is there presumably because the client to which the lambda expression is passed expects it to have an argument of type size_t. The client has no way of knowing how the argument is used in the lambda expression or whether it is used at all.
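For illustration, here is a minimal, self-contained sketch (the function and variable names are made up, not from the question) of a client that expects a callable taking a size_t, and a lambda that declares the parameter without naming it because it never uses the value:

#include <functional>
#include <iostream>

// The "client": it only knows it will hand a size_t to whatever callable it gets.
void runWithByteCount(const std::function<void(std::size_t)>& callback)
{
    callback(42); // the client chooses the value
}

int main()
{
    // The parameter type must be declared so the signature matches,
    // but it is left unnamed because this callback ignores the value.
    runWithByteCount([](std::size_t) {
        std::cout << "done, byte count ignored\n";
    });
}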
This is like callbacks, where your callback receives data from the caller and you do whatever you want with that data.
So if you don't need the data, you can skip naming the parameter, since it's unreferenced.
You can see more examples of callbacks in the documentation of some WinAPI functions, especially the ones that enumerate things, e.g. EnumWindows and EnumChildWindows with their EnumProc callbacks.
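As a rough sketch of that WinAPI pattern (not tied to the code in the question): an EnumWindows callback must accept an LPARAM of user data, and the parameter can stay unnamed when the callback has no use for it:

#include <windows.h>

// Called once per top-level window; the LPARAM user-data argument is
// ignored here, so it is left unnamed.
static BOOL CALLBACK onTopLevelWindow(HWND hwnd, LPARAM)
{
    // ... do something with hwnd ...
    return TRUE; // keep enumerating
}

int main()
{
    EnumWindows(onTopLevelWindow, 0);
}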
As others have said, the lambda expression
[=](size_t)
{
return fileStream->close();
}
is being passed to a method call
.then()
To shed some additional light: usually, a method called .then() is part of a Futures callback interface. The then() method is called on a Future<T> object, where T is some type. It will expect a callback parameter. This causes callback chaining: when the Future<T> is fulfilled, we will have a T, and at this point in time the callback is invoked with that T.
In your case, T = size_t. So presumably, the Future object that .then() is called on returns a size_t, which is then passed to the lambda [=] (size_t) { ... }. The lambda then discards the size_t because it doesn't need it.
What's the point of taking the size_t parameter if it doesn't need it? Well, maybe the original Future object was some kind of read call, and it stored the result somewhere else (i.e. the work is done by side-effect) and returned the number of bytes it read (the size_t). But the callback is just doing some cleanup work and doesn't care about what was read. It would be like the following synchronous code:
size_t readFile(char* buf) {
    // ... store stuff in buf
    return bytesRead;
}

auto closeFileStream(size_t) {
    return fileStream->close();
}

closeFileStream(readFile(&buf));
In terms of Futures, it's probably something more like:
Future<size_t> readFile(char* buf) {
    // ... asynchronously store stuff in buf
    // and return bytesRead as a Future
}

auto closeFileStream(size_t) {
    return fileStream->close();
}

readFile(&buf)
    .then(closeFileStream)
    .get(); // wait synchronously

Pattern for future conversion

Currently we are using asynchronous values very heavily.
Assume that I have a function which does something like this:
int do_something(const boost::posix_time::time_duration& sleep_time)
{
    BOOST_MESSAGE("Sleeping a bit");
    boost::this_thread::sleep(sleep_time);
    BOOST_MESSAGE("Finished taking a nap");
    return 42;
}
At some point in the code we create a task which creates a future to such an int value, which will be set by a packaged_task - like this (working_queue is a boost::asio::io_service in this example):
boost::unique_future<int> createAsynchronousValue(const boost::posix_time::seconds& sleep)
{
    boost::shared_ptr< boost::packaged_task<int> > task(
        new boost::packaged_task<int>(boost::bind(do_something, sleep)));
    boost::unique_future<int> ret = task->get_future();

    // Trigger execution
    working_queue.post(boost::bind(&boost::packaged_task<int>::operator (), task));

    return boost::move(ret);
}
At another point in the code I want to wrap this function to return some higher-level object which should also be a future. I need a conversion function which takes the first value and transforms it into another value (in our actual code we have some layering and do asynchronous RPC which returns futures to responses - these responses should be converted into futures to real objects, PODs or even a void future, to be able to wait on them or catch exceptions). So this is the conversion function in this example:
float converter(boost::shared_future<int> value)
{
    BOOST_MESSAGE("Converting value " << value.get());
    return 1.0f * value.get();
}
Then I thought of creating a lazy future as described in the Boost docs to do this conversion only if wanted:
void invoke_lazy_task(boost::packaged_task<float>& task)
{
    try
    {
        task();
    }
    catch(boost::task_already_started&)
    {}
}
And then I have a function (might be a higher level API) to create a wrapped future:
boost::unique_future<float> createWrappedFuture(const boost::posix_time::seconds& sleep)
{
    boost::shared_future<int> int_future(createAsynchronousValue(sleep));
    BOOST_MESSAGE("Creating converter task");
    boost::packaged_task<float> wrapper(boost::bind(converter, int_future));

    BOOST_MESSAGE("Setting wait callback");
    wrapper.set_wait_callback(invoke_lazy_task);

    BOOST_MESSAGE("Creating future to converter task");
    boost::unique_future<float> future = wrapper.get_future();

    BOOST_MESSAGE("Returning the future");
    return boost::move(future);
}
At the end I want to be able to use it like this:
{
    boost::unique_future<float> future = createWrappedFuture(boost::posix_time::seconds(1));

    BOOST_MESSAGE("Waiting for the future");
    future.wait();

    BOOST_CHECK_EQUAL(future.get(), 42.0f);
}
But here I end up getting an exception about a broken promise. The reason seems pretty clear to me: the packaged_task which does the conversion goes out of scope.
So my question is: how do I deal with such situations? How can I prevent the task from being destroyed? Is there a pattern for this?
Bests,
Ronny
You need to manage the lifetime of the task object properly.
The most correct way is to return boost::packaged_task<float> instead of boost::unique_future<float> from createWrappedFuture(). The caller will then be responsible for getting the future object and for keeping the task alive until the future value is ready.
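As a rough sketch of this first option (reusing converter, invoke_lazy_task and createAsynchronousValue from the question, and assuming the same io_service worker setup), the factory hands the task itself to the caller:

boost::packaged_task<float> createWrappedTask(const boost::posix_time::seconds& sleep)
{
    boost::shared_future<int> int_future(createAsynchronousValue(sleep));
    boost::packaged_task<float> wrapper(boost::bind(converter, int_future));
    return boost::move(wrapper);
}

// The caller owns the task, so it lives until the result has been consumed.
boost::packaged_task<float> task = createWrappedTask(boost::posix_time::seconds(1));
task.set_wait_callback(invoke_lazy_task); // set once the task sits in its final location
boost::unique_future<float> future = task.get_future();
future.wait();                            // runs the conversion via the wait callback
BOOST_CHECK_EQUAL(future.get(), 42.0f);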
Or you can place the task object into some 'pending' queue (global or a class member), in a similar way to what you did in createAsynchronousValue. But in this case you will need to explicitly manage the task's lifetime and remove it from the queue after completion, so I don't think this solution has any advantage over returning the task object itself.
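For completeness, a rough sketch of this second approach (the class name and the pending_tasks_ member are made up; removing finished tasks is left out): the lazy converter task is kept alive in a container owned by the class.

class ValueProvider
{
public:
    boost::unique_future<float> createWrappedFuture(const boost::posix_time::seconds& sleep)
    {
        boost::shared_future<int> int_future(createAsynchronousValue(sleep));

        boost::shared_ptr< boost::packaged_task<float> > wrapper(
            new boost::packaged_task<float>(boost::bind(converter, int_future)));
        wrapper->set_wait_callback(invoke_lazy_task);

        boost::unique_future<float> future = wrapper->get_future();
        pending_tasks_.push_back(wrapper); // prolongs the task's lifetime

        return boost::move(future);
    }

private:
    std::vector< boost::shared_ptr< boost::packaged_task<float> > > pending_tasks_;
};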