I want flow output (return type Flow<T>) from a non-flow function (return type T).
fun getTotalFiles(): Int
// Say this is a library function that returns the number of files (Int) in a folder at that specific moment.
// And:
fun getAllFiles(): List<File>
// Say this is a library function that returns all the files (List<File>) in that folder.
The files in that folder can and will change in the future.
Now I want to constantly observe the output, so how do I implement it?
fun getFlowOfTotalFiles(): Flow<Int> =
// A wrapper function that converts the library function's return type into an observable flow, Flow<Int>
// And:
fun getFlowOfAllFiles(): Flow<List<File>> =
// A wrapper function that converts the library function's return type into an observable flow, Flow<List<File>>
For specifically monitoring a directory for files, you can use a WatchService and convert it to a flow with the flow builder. Something like this:
fun getDirectoryMonitorFlow(directory: String) = flow {
    FileSystems.getDefault().newWatchService().use { watchService ->
        // Register once; the same WatchKey is polled on every loop iteration.
        val watchKey = Path.of(directory)
            .register(watchService, ENTRY_CREATE, ENTRY_DELETE, ENTRY_MODIFY)
        while (true) {
            if (watchKey.pollEvents().isNotEmpty()) {
                emit(Unit)
            }
            yield() // give the flow an opportunity to be cancelled
            if (!watchKey.reset()) {
                println("Directory became unreadable. Finishing flow.")
                break
            }
        }
    }
}
    .catch { println("Exception while monitoring directory.") }
    .flowOn(Dispatchers.IO)
And then your class might look like:
fun getFlowOfTotalFiles(): Flow<Int> = getFlowOfAllFiles()
    .map { it.size }
    .distinctUntilChanged()

fun getFlowOfAllFiles(): Flow<List<File>> = flow {
    emit(Unit) // so the current state is always emitted first
    emitAll(getDirectoryMonitorFlow(directory))
}
    .map {
        File(directory).listFiles()?.toList().orEmpty()
    }
    .flowOn(Dispatchers.IO)
    .distinctUntilChanged()
Although you might consider making the first flow a private SharedFlow so you aren't running multiple WatchServices to monitor the same directory concurrently.
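For example, here is a minimal sketch of that sharing idea, assuming the flows live in a class that already has a CoroutineScope (called scope below) and the directory path; the property name directoryEvents is just illustrative, and shareIn / SharingStarted come from kotlinx.coroutines.flow:
// One shared upstream means a single WatchService serves every collector;
// WhileSubscribed() stops watching when the last collector goes away.
private val directoryEvents: SharedFlow<Unit> =
    getDirectoryMonitorFlow(directory)
        .shareIn(scope, SharingStarted.WhileSubscribed())
getFlowOfAllFiles() would then emitAll(directoryEvents) instead of calling getDirectoryMonitorFlow(directory) directly.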
I believe you need an infinite loop inside a flow builder, something like the following:
fun getFlowOfTotalFiles(): Flow<Int> = flow {
    while (true) {
        emit(getTotalFiles())
        // Delay for 5 seconds before the next request; the infinite loop ends
        // when the coroutine that collects this Flow is cancelled, because
        // delay() is a cancellable suspension point.
        delay(5000)
    }
}

fun getFlowOfAllFiles(): Flow<List<File>> = flow {
    while (true) {
        emit(getAllFiles())
        delay(5000)
    }
}
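A small usage sketch, assuming some CoroutineScope named scope (for example a viewModelScope) is available to host the collector:
val job = scope.launch {
    getFlowOfTotalFiles().collect { total ->
        println("Folder now contains $total files")
    }
}
// Later, when you no longer need updates:
job.cancel() // cancelling the collector ends the infinite loop at its next delay()
Cancellation works because delay() is a cancellable suspension point, so the while (true) loop never needs an explicit exit condition.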
Here is a sample of the code flow:
Trigger the process with an API specifying bulkSize and totalRecords.
Use those parameters to acquire data from DB
Create a processor with the bulkSize.
Send both the data and processor into a method which:
- iterates over the result set, assembles a JSON document for each result, and, if the final JSON is not empty, calls a method that adds it to the processor via processor.add().
This is where the outcome of the code diverges.
After this, if the concurrentRequests parameter is 0, 1, or any value < (totalRecords / bulkSize), the processor.add() line is where the code stalls and never continues to the next debug line.
However, when we increase the concurrentRequests parameter to a value > (totalRecords / bulkSize), the code is able to finish the .add() call and move on to the next line.
My reasoning leads me to believe we might be having issues with our BulkProcessorListener, which is keeping .add() from closing or finishing the way it is supposed to. I would really appreciate some more insight on this topic!
Here is the Listener we are using:
private class BulkProcessorListener implements Listener {
    @Override
    public void beforeBulk(long executionId, BulkRequest request) {
        // Some log statements
    }

    @Override
    public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
        // More log statements
    }

    @Override
    public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
        // Log statements
    }
}
Here is the createProcessor():
public synchronized BulkProcessor createProcessor(int bulkActions) {
    Builder builder = BulkProcessor.builder((request, bulkListener) -> {
        long timeoutMin = 60L;
        try {
            request.timeout(TimeValue.timeValueMinutes(timeoutMin));
            // Log statements
            client.bulkAsync(request, RequestOptions.DEFAULT, new ResponseActionListener<BulkResponse>());
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }, new BulkProcessorListener());
    builder.setBulkActions(bulkActions);
    builder.setBulkSize(new ByteSizeValue(bulkSize, ByteSizeUnit.MB));
    builder.setFlushInterval(TimeValue.timeValueSeconds(5));
    builder.setConcurrentRequests(0);
    builder.setBackoffPolicy(BackoffPolicy.noBackoff());
    return builder.build();
}
Here is the method where we call processor.add():
@SuppressWarnings("deprecation")
private void addData(BulkProcessor processor, String indexName, JSONObject finalDataJSON, Map<String, String> previousUniqueObject) {
    // Debug logs
    processor.add(new IndexRequest(indexName, INDEX_TYPE, previousUniqueObject.get(COMBINED_ID))
        .source(finalDataJSON.toString(), XContentType.JSON));
    // Debug logs
}
I want to have a periodic timer loop (e.g. 1 second intervals). There are many ways to do that, but I haven't found a solution that would be suitable for unit testing.
Timer should be precise
Unit test should be able to skip the waiting
The closest I came to a solution was to use coroutines: a simple loop with delay, runBlockingTest, and advanceTimeBy.
coScope.launch {
    while (isActive) {
        // do stuff
        delay(1000L)
    }
}
and
@Test
fun timer_test() = coScope.runBlockingTest {
    ... // start job
    advanceTimeBy(9_000L)
    ... // cancel job
}
It works to some degree, but the timer is not precise as it does not account for the execution time.
I haven't found a way to query the internal timer used by a coroutine scope, or the remaining timeout value inside withTimeoutOrNull:
coScope.launch {
    withTimeoutOrNull(999_000_000L) { // max allowed looping time
        while (isActive) {
            // do stuff
            val timeoutLeft // How to get that value ???
            delay(timeoutLeft.mod(1000L))
        }
    }
}
Next idea was to use ticker:
coScope.launch {
    val tickerChannel = ticker(1000L, 0L, coroutineContext)
    var referenceTimer = 0L
    for (event in tickerChannel) {
        // do stuff
        println(referenceTimer)
        referenceTimer += 1000L
    }
}
However, the combination of TestCoroutineDispatcher() and ticker does not produce the right results:
private val coDispatcher = TestCoroutineDispatcher()

@Test
fun timerTest() = runBlockingTest(coDispatcher) {
    myTimer.launchPeriodicJob()
    advanceTimeBy(20_000L) // or delay(20_000L)
    myTimer.cancelPeriodicJob()
    println("end_of_test")
}
rather consistently results in:
0
1000
2000
3000
4000
5000
6000
end_of_test
I am also open to any alternative approaches that satisfy the two points above.
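One way to address the precision point is to measure how long the work takes and delay only for the remainder of the period. A rough sketch, where launchPreciseTimer, periodMillis and doWork are placeholder names (not from the code above) and measureTimeMillis comes from kotlin.system:
fun CoroutineScope.launchPreciseTimer(periodMillis: Long, doWork: suspend () -> Unit): Job =
    launch {
        while (isActive) {
            // Measure how long the work itself took...
            val elapsed = measureTimeMillis { doWork() }
            // ...and sleep only for what is left of the period (or not at all if the work overran it).
            delay((periodMillis - elapsed).coerceAtLeast(0L))
        }
    }
Under a test dispatcher the measured wall-clock time is practically zero, so the loop reduces to a plain delay(periodMillis) that advanceTimeBy can still skip; this keeps a real timer from drifting, but it does not by itself answer the virtual-time question above.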
I have a method which iterates through the items in a cart and places an order for each of them using placeOrder.
Once placeOrder has been called for all the items in the cart, I want to consolidate the results and send a single Mono object summarizing which orders went through and which did not.
This code works, but it does not execute placeOrder in parallel.
List<Mono<OrderResponse>> orderResponse = new ArrayList<Mono<OrderResponse>>();
OrderCombinedResponse combinedResponse = new OrderCombinedResponse();

// placeIndividualOrder returns Mono<OrderResponse>
session.getCartItems().forEach(cartItem ->
    orderResponse.add(placeIndividualOrder(cartItem)));

return Flux.concat(orderResponse).collectList().map(responseList -> {
    responseList.forEach(response -> {
        // Do transformation to separate out failed and successful orders
    });
    // Return Mono<OrderCombinedResponse> object
    return combinedResponse;
});
I am trying to get the code below to process the orders in the cart in parallel, but it does not return any response and just exits:
// Return Mono<OrderCombinedResponse> object
return Flux.fromIterable(session.getCartItems()).parallel()
    // Call method to place order. This method returns Mono<OrderResponse>
    .map(cartItem -> placeIndividualOrder(cartItem))
    .runOn(Schedulers.elastic())
    .map(r -> {
        r.subscribe(response -> {
            // Do transformation to separate out failed and successful orders
        });
        return combinedResponse;
    });
Since placeIndividualOrder() returns a Mono, you need to call it with .flatMap(). The .runOn() should go above the call to placeIndividualOrder(); if it goes after, as in your code above, only the subsequent .map() runs on the scheduler. Finally, instead of calling subscribe() inside .map() as you do, you should just call .subscribe() after .flatMap():
return Flux.fromIterable(session.getCartItems()).parallel()
    .runOn(Schedulers.elastic())
    // Call method to place order. This method returns Mono<OrderResponse>
    .flatMap(cartItem -> placeIndividualOrder(cartItem))
    .subscribe(response -> {
        // do something with response
    },
    e -> {
        // catch and report error
    });
There is a Broadcaster that accepts strings and appends them to a StringBuilder.
I want to test it.
I have to use Thread#sleep to wait while the broadcaster finishes processing the strings. I want to remove the sleep.
I tried to use Control#debug() unsuccessfully.
public class BroadcasterUnitTest {

    @Test
    public void test() {
        // prepare
        Environment.initialize();
        Broadcaster<String> sink = Broadcaster.create(Environment.newDispatcher()); // run broadcaster in a separate thread (dispatcher)
        StringBuilder sb = new StringBuilder();
        sink
            .observe(s -> sleep(100)) // long-running operation
            .consume(sb::append);

        // do
        sink.onNext("a");
        sink.onNext("b");

        // assert
        sleep(500); // wait until the broadcaster has finished (if this line is commented out, the test fails)
        assertEquals("ab", sb.toString());
    }

    private void sleep(int millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}
I'm not familiar with Broadcaster (and it's probably deprecated, since the question is old), but these three approaches could be helpful in general:
When testing Project Reactor's Fluxes and Monos, you're probably better off using their testing library, which was made specifically for this. Their reference documentation and the Javadoc on that part are pretty good, and I'll just copy an example that speaks for itself here:
@Test
public void testAppendBoomError() {
    Flux<String> source = Flux.just("foo", "bar");

    StepVerifier.create(appendBoomError(source))
        .expectNext("foo")
        .expectNext("bar")
        .expectErrorMessage("boom")
        .verify();
}
You could just block() by yourself on the Fluxes and Monos and then run checks. Note that if an error is emitted, this will result in an exception. But I have a feeling you'll find yourself needing to write more code for some cases (e.g., checking that the Flux has emitted two items X and Y and then terminated with an error), and you'd then be re-implementing StepVerifier.
@Test
public void testFluxOrMono() {
    Flux<Integer> source = Flux.just(2, 3);
    List<Integer> result = source
        .flatMap(i -> multiplyBy2Async(i))
        .collectList()
        .block();
    // run your asserts on the list. Reminder: the order may not be what you expect because of the flatMap

    // Or with a Mono:
    Integer resultOfMono = Mono.just(5)
        .flatMap(i -> multiplyBy2Async(i))
        .map(i -> i * 4)
        .block();
    // run your asserts on the integer
}
You could use general solutions for async testing like CountDownLatch, but, again, I wouldn't recommend it, and it would give you trouble in some cases. For example, if you don't know the number of receivers in advance, you'll need to use something else.
Per the answer above, I found blockLast() helped.
@Test
public void MyTest() throws InterruptedException {
    Logs.Info("Start test");

    /* 1 */
    // Make a request
    WebRequest wr1 = new WebRequest("1", "2", "3", "4");
    String json1 = wr1.toJson(wr1);

    Logs.Info("Flux");
    Flux<String> responses = controller.getResponses(json1);

    /* 2 */
    Logs.Info("Responses in");
    responses.subscribe(s -> mySub.myMethod(s)); // Test for strings is in myMethod

    Logs.Info("Test thread sleeping");
    Thread.sleep(2000);

    /* 3 */
    Logs.Info("Test thread blocking");
    responses.blockLast();
    Logs.Info("Finish test");
}
How can I wait until a Promise is resolved before executing the next line of code?
e.g.
var option = null;
if (mustHaveOption) {
    option = store.find("option", 1).then(function(option) { return option; });
}
// wait until the promise is resolved before returning this value
return option;
rallrall provided the correct answer in his comment: you can't.
The solution for me was to redesign my code to return promises; the receiving function must then evaluate the result, something along the lines of:
function a() {
    return mustHaveOption ? store.find("option", 1) : false;
}

function b() {
    var res = a();
    if (res) {
        res.then(function(option) {
            // see option here
        });
    }
}
Another key solution for me was to use an array of promises: create an array of all the promises that must resolve before executing the next piece of code:
Em.RSVP.Promise.all(arrayOfPromises).then(function(results) {
    // code that must be executed only after all of the promises in arrayOfPromises are resolved
});
It took me a while to wrap my head around this async way of programming, but once I did, things worked quite nicely.
With async/await (ES2017), the code becomes much more readable:
async function getSomeOption() {
    var option = null;
    if (mustHaveOption) {
        option = await store.find("option", 1);
    }
    return option;
}
PS: this code could be simplified, but I'd rather keep it close to the example given above.
You can start by showing a loading gif, then subscribe to the didLoad event for the record, inside which you can continue your actual processing:
record = App.User.find(1);
// show gif...

record.on("didLoad", function() {
    console.log("ren loaded!");
    // end gif; continue processing...
});