I am curious about using RxJava to implement best-effort retry in Akka, without Persistent Actors. The idea is to use Rx's retry method to keep asking until a response is received from the destination actor.
Other examples of this are hard to find. Are there any Akka gurus out there who could verify this implementation, or point me to a better solution?
Example:
public class RxWithAkka {
private final Logger LOGGER = LoggerFactory.getLogger(getClass());
public static final Timeout TIMEOUT = Timeout.apply(10, TimeUnit.MILLISECONDS);
private final ActorRef actor;
private final ActorSystem actorSystem;
public RxWithAkka(ActorSystem actorSystem) {
this.actorSystem = actorSystem;
this.actor = actorSystem.actorOf(Props.create(MyActor.class));
}
public Observable<Object> ping() {
return createObservable()
.doOnError(t -> LOGGER.warn(t.getMessage()))
.retry();
}
Observable<Object> createObservable() {
return Observable.create(subscriber -> {
LOGGER.info("Send ping");
Patterns.ask(actor, "ping", TIMEOUT)
.onComplete(new OnComplete<Object>() {
@Override
public void onComplete(Throwable failure, Object success) throws Throwable {
if (success != null) {
subscriber.onNext(success);
subscriber.onCompleted();
} else {
subscriber.onError(failure);
}
}
}, actorSystem.dispatcher());
});
}
}
Example actor that demonstrates an ignored message and a timeout:
public class MyActor extends UntypedActor {
private int counter = 0;
@Override
public void onReceive(Object message) throws Exception {
switch (counter++) {
case 0:
// ignore message
break;
case 1:
// timeout
Thread.sleep(200);
break;
default:
getSender().tell("pong", getSelf());
}
}
}
Test:
public class RxWithAkkaTest {
@Test
public void testIt() throws Exception {
ActorSystem system = ActorSystem.create("system");
RxWithAkka example = new RxWithAkka(system);
String res = (String) example.ping().toBlocking().first();
assertThat(res).isEqualTo("pong");
}
}
In RxJava, you can use the timeout operator in conjunction with retry.
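A minimal sketch of that combination, assuming RxJava 1.x and the createObservable() method from the question (the 50 ms deadline and the limit of 5 attempts are illustrative values, not part of the original code):
public Observable<Object> pingWithDeadline() {
    return createObservable()
            .timeout(50, TimeUnit.MILLISECONDS)   // fail the current attempt if no reply arrives in time
            .doOnError(t -> LOGGER.warn("attempt failed: {}", t.getMessage()))
            .retry(5);                            // best effort: resubscribe (re-ask the actor) at most 5 times
}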
I have a scenario where I process an Armeria request and dispatch an event to Guava's EventBus. The problem is that I lose the context while processing the event in the EventBus handler.
I want to know whether there is any way to let the event processor access the ServiceRequestContext.
class EventListener {
@Subscribe
public void process(SomeCustomizedClass event) {
final ServiceRequestContext context = ServiceRequestContext.currentOrNull();
log.info("process ServiceRequestContext context={}", context);
}
}
Register the event handler:
EventBus eventBus = new AsyncEventBus(new ThreadPoolTaskExecutor());
eventBus.register(new EventListener());
Here is my Armeria service:
@Slf4j
public class NameAuthRestApi {
final NameAuthService nameAuthService;
#Post("/auth")
#ProducesJson
public Mono<RealNameAuthResp> auth(RealNameAuthReq req) {
return nameAuthService.auth(NameAuthConverter.CONVERTER.toDto(req))
.handle((result, sink) -> {
if (result.isSuccess()) {
// I post an event here, but the event process couldn't access the ServiceRequestContext
// that's would be the problem.
eventBus.post(new SomeCustomizedClass(result));
final RealNameAuthResp realNameAuthResp = new RealNameAuthResp();
realNameAuthResp.setTradeNo(result.getTradeNo());
realNameAuthResp.setSuccess(true);
sink.next(realNameAuthResp);
sink.complete();
} else {
sink.error(new SystemException(ErrorCode.API_ERROR, result.errors()));
}
});
}
}
You need to do:
public Mono<RealNameAuthResp> auth(ServiceRequestContext ctx, RealNameAuthReq req) {
// Executed by an EventLoop 1.
// This thread has the ctx in its thread local.
return nameAuthService.auth(NameAuthConverter.CONVERTER.toDto(req))
.handle((result, sink) -> {
// Executed by another EventLoop 2.
// But this thread doesn't.
try (SafeCloseable ignored = ctx.push()) {
if (result.isSuccess()) {
...
} else {
...
}
}
});
}
The problem is that the handle method is executed by another thread that does not have the ctx in its thread local. So, you should manually set the ctx.
You can achieve the same effect by using the *Async variants (handleAsync here) with ctx.eventLoop():
public Mono<RealNameAuthResp> auth(ServiceRequestContext ctx, RealNameAuthReq req) {
return nameAuthService.auth(NameAuthConverter.CONVERTER.toDto(req))
.handleAsync((result, sink) -> {
if (result.isSuccess()) {
...
} else {
...
}
}, ctx.eventLoop());
}
We have two ways to solve this:
First, use the executor which has the ctx:
ctx.eventLoop().submit(new Task(new Event("eone")));
// If it's a blocking task, we must use ctx.blockingTaskExecutor() instead.
Or, propagate the ctx manually:
@Slf4j
public static class Task implements Runnable {
private final Event event;
private final ServiceRequestContext ctx;
Task(Event event) {
this.event = event;
ctx = ServiceRequestContext.current();
}
@Override
public void run() {
try (SafeCloseable ignored = ctx.push()) {
...
}
}
}
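The same manual-propagation idea can be applied to the EventBus scenario from the question: capture the context when the event is constructed on a context-aware thread, then push it inside the subscriber. This is only a sketch; the extra ctx field on SomeCustomizedClass and its context() accessor are illustrative additions, not part of the original code.
public class SomeCustomizedClass {
    private final Object result;
    private final ServiceRequestContext ctx;
    public SomeCustomizedClass(Object result) {
        this.result = result;
        // Must be constructed on a thread that has the context, e.g. inside ctx.push().
        this.ctx = ServiceRequestContext.current();
    }
    public ServiceRequestContext context() {
        return ctx;
    }
}
class EventListener {
    @Subscribe
    public void process(SomeCustomizedClass event) {
        try (SafeCloseable ignored = event.context().push()) {
            // ServiceRequestContext.currentOrNull() now returns the captured context.
            log.info("process ServiceRequestContext context={}", ServiceRequestContext.currentOrNull());
        }
    }
}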
@minwoox, to simplify, my code looks like this:
public class NameAuthRestApi {
JobExecutor executor = new JobExecutor();
#Post("/code")
public HttpResponse authCode(ServiceRequestContext ctx) {
try (SafeCloseable ignore = ctx.push()) {
executor.submit(new Task(new Event("eone")));
}
return HttpResponse.of("OK");
}
@Getter
@AllArgsConstructor
public static class Event {
private String name;
}
@RequiredArgsConstructor
@Slf4j
public static class Task implements Runnable {
final Event event;
@Override
public void run() {
// couldn't access ServiceRequestContext here
ServiceRequestContext ctx = ServiceRequestContext.currentOrNull();
log.info("ctx={}, event={}", ctx, event);
}
}
public static class JobExecutor {
ExecutorService executorService = Executors.newFixedThreadPool(2);
public void submit(Task task) {
executorService.submit(task);
}
}
}
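Applying the first suggestion to the simplified code above only requires changing where the task is submitted; a sketch (everything else stays as in the question):
@Post("/code")
public HttpResponse authCode(ServiceRequestContext ctx) {
    // Submit on the request's event loop so Task.run() sees the context ...
    ctx.eventLoop().submit(new Task(new Event("eone")));
    // ... or, if the task blocks, use the blocking task executor instead:
    // ctx.blockingTaskExecutor().submit(new Task(new Event("eone")));
    return HttpResponse.of("OK");
}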
I am trying to write a unit test for the handler function, following the example from the Spring project. Can someone help me understand why the following test throws an UnsupportedMediaTypeStatusException?
Thanks
Handler function
public Mono<ServerResponse> handle(ServerRequest serverRequest) {
log.info("{} Processing create request", serverRequest.exchange().getLogPrefix());
return ok().body(serverRequest.bodyToMono(Person.class).map(p -> p.toBuilder().id(UUID.randomUUID().toString()).build()), Person.class);
}
Test Class
@SpringBootTest
@RunWith(SpringRunner.class)
public class MyHandlerTest {
@Autowired
private MyHandler myHandler;
private ServerResponse.Context context;
@Before
public void createContext() {
HandlerStrategies strategies = HandlerStrategies.withDefaults();
context = new ServerResponse.Context() {
@Override
public List<HttpMessageWriter<?>> messageWriters() {
return strategies.messageWriters();
}
@Override
public List<ViewResolver> viewResolvers() {
return strategies.viewResolvers();
}
};
}
@Test
public void handle() {
Gson gson = new Gson();
MockServerWebExchange exchange = MockServerWebExchange.from(
MockServerHttpRequest.post("/api/create")
.body(gson.toJson(Person.builder().firstName("Jon").lastName("Doe").build())));
MockServerHttpResponse mockResponse = exchange.getResponse();
ServerRequest serverRequest = ServerRequest.create(exchange, HandlerStrategies.withDefaults().messageReaders());
Mono<ServerResponse> serverResponseMono = myHandler.handle(serverRequest);
Mono<Void> voidMono = serverResponseMono.flatMap(response -> {
assertThat(response.statusCode()).isEqualTo(HttpStatus.OK);
boolean condition = response instanceof EntityResponse;
assertThat(condition).isTrue();
return response.writeTo(exchange, context);
});
StepVerifier.create(voidMono)
.expectComplete().verify();
StepVerifier.create(mockResponse.getBody())
.consumeNextWith(a -> System.out.println(a))
.expectComplete().verify();
assertThat(mockResponse.getHeaders().getContentType()).isEqualTo(MediaType.APPLICATION_JSON);
}
}
Error Message:
java.lang.AssertionError: expectation "expectComplete" failed (expected: onComplete(); actual: onError(org.springframework.web.server.UnsupportedMediaTypeStatusException: 415 UNSUPPORTED_MEDIA_TYPE "Content type 'application/octet-stream' not supported for bodyType=com.example.demo.Person"))
I found that I was missing .contentType(MediaType.APPLICATION_JSON) on my mock request.
MockServerWebExchange.from(
MockServerHttpRequest.post("/api/create").contentType(MediaType.APPLICATION_JSON)
.body(gson.toJson(Person.builder().firstName("Jon").lastName("Doe").build())));
Adding it fixed my issue.
I want to design my tests around a MassTransit consumer so that I can send the consumer messages with a variety of content. Based on the content of the message, the consumer will "do work" and relay a message.
The problem I have is that when running two of these tests, in separate test fixtures, something seems to interfere with the second test; run individually, each test passes.
After looking through the MassTransit test project I have come up with some example test code to demonstrate the problem I'm having.
[TestFixture]
public class PingPongMessageTestFixture : InMemoryTestFixture
{
private PongConsumer _pongConsumer;
protected override void ConfigureInMemoryReceiveEndpoint(IInMemoryReceiveEndpointConfigurator configurator)
{
_received = Handled<IPongMessage>(configurator);
}
protected override void PreCreateBus(IInMemoryBusFactoryConfigurator configurator)
{
var _pingConsumer = new PingConsumer();
_pongConsumer = new PongConsumer();
configurator.ReceiveEndpoint("test_ping_queue", e =>
{
e.Consumer(() => _pingConsumer);
});
configurator.ReceiveEndpoint("test_pong_queue", e =>
{
e.Consumer(() => _pongConsumer);
});
}
Task<ConsumeContext<IPongMessage>> _received;
[Test]
public async Task test_how_to_test_consumers()
{
await Bus.Publish<IPingMessage>(new { MessageId = 100 });
await _received;
Assert.IsTrue(_pongConsumer.hitme);
Assert.AreEqual(100, _pongConsumer.pongMessage.MessageId);
}
public class PingConsumer : IConsumer<IPingMessage>
{
public Task Consume(ConsumeContext<IPingMessage> context)
{
context.Publish<IPongMessage>(new { context.Message.MessageId });
return Task.CompletedTask;
}
}
public class PongConsumer : IConsumer<IPongMessage>
{
internal bool hitme;
internal IPongMessage pongMessage;
public Task Consume(ConsumeContext<IPongMessage> context)
{
hitme = true;
pongMessage = context.Message;
return Task.CompletedTask;
}
}
public interface IPingMessage
{
int MessageId { get; set; }
}
public interface IPongMessage
{
int MessageId { get; set; }
}
}
This test sends a message to the ping consumer, which in turn sends a message to the pong consumer.
This by itself works and verifies that the ping consumer sends a pong message. In a real-life scenario the "ping" consumer sends update messages to another service, and the pong consumer is just a test consumer used by the tests.
If I have a second test fixture, which for this question is very similar, it fails when both tests are run together, though individually it passes.
The second test does the same thing:
[TestFixture]
public class DingDongMessageTestFixture : InMemoryTestFixture
{
private DongConsumer _dongConsumer;
protected override void ConfigureInMemoryReceiveEndpoint(IInMemoryReceiveEndpointConfigurator configurator)
{
_received = Handled<IDongMessage>(configurator);
}
protected override void PreCreateBus(IInMemoryBusFactoryConfigurator configurator)
{
var _dingConsumer = new DingConsumer();
_dongConsumer = new DongConsumer();
configurator.ReceiveEndpoint("test_ding_queue", e =>
{
e.Consumer(() => _dingConsumer);
});
configurator.ReceiveEndpoint("test_dong_queue", e =>
{
e.Consumer(() => _dongConsumer);
});
}
Task<ConsumeContext<IDongMessage>> _received;
[Test]
public async Task test_how_to_test_consumers()
{
await Bus.Publish<IDingMessage>(new { MessageId = 100 });
await _received;
Assert.IsTrue(_dongConsumer.hitme);
Assert.AreEqual(100, _dongConsumer.pongMessage.MessageId);
}
public class DingConsumer : IConsumer<IDingMessage>
{
public Task Consume(ConsumeContext<IDingMessage> context)
{
context.Publish<IDongMessage>(new { context.Message.MessageId });
return Task.CompletedTask;
}
}
public class DongConsumer : IConsumer<IDongMessage>
{
internal bool hitme;
internal IDongMessage pongMessage;
public Task Consume(ConsumeContext<IDongMessage> context)
{
hitme = true;
pongMessage = context.Message;
return Task.CompletedTask;
}
}
public interface IDingMessage
{
int MessageId { get; set; }
}
public interface IDongMessage
{
int MessageId { get; set; }
}
}
Is this a good approach for testing MassTransit consumers?
If so, do I need to somehow reset the InMemoryTestFixture per test fixture?
In your test fixtures, I don't believe there should be any conflict, but because of the interaction with NUnit there may be something I'm unaware of in the base class inheritance that's being used.
If you use the InMemoryTestHarness directly (the same functionality as the test fixtures, but without any testing framework dependency), I would expect that you would not experience any interaction between two simultaneously executing tests.
Your approach is the way it should be done, but again, I'd suggest using the InMemoryTestHarness instead of the fixture.
An example test is linked: https://github.com/MassTransit/MassTransit/blob/master/src/MassTransit.Tests/Testing/ConsumerTest_Specs.cs
The key to this behaviour lies in the source code for the InMemoryTestFixture.
public class InMemoryTestFixture : BusTestFixture
{
...
[OneTimeSetUp]
public Task SetupInMemoryTestFixture()
{
return InMemoryTestHarness.Start();
}
[OneTimeTearDown]
public async Task TearDownInMemoryTestFixture()
{
await InMemoryTestHarness.Stop().ConfigureAwait(false);
InMemoryTestHarness.Dispose();
}
...
}
As you can see from this snippet, the test harness is started and stopped in the [OneTimeSetUp] and [OneTimeTearDown] methods, i.e. before any tests in the [TestFixture] are run and after all tests in the fixture have completed - not for each test case.
My solution is to create a new test fixture each time. I believe this is what the writers of MassTransit.TestFramework intended, as it is what they do in their Common_SagaStateMachine example.
I am just starting with Akka and have created a test application. In it I create a bunch of actors, each of which creates a scheduler to generate a heartbeat event. Upon another type of event I cancel the scheduler with heartbeat.cancel(), but I'd like to restart it when yet another event occurs. If I recreate the scheduler, I see that the memory consumption increases continuously.
The question, then, is either how do I resume the scheduler, or how do I dispose of the scheduler properly?
This is the code for that actor:
public class Device extends UntypedActor {
enum CommunicationStatus{
OK,
FAIL,
UNKNOWN
}
private static class Heartbeat {
}
public final String deviceId;
private CommunicationStatus commStatus;
private Cancellable heartBeatScheduler;
public Device(String Id)
{
deviceId = Id;
commStatus = CommunicationStatus.UNKNOWN;
}
@Override
public void preStart() {
getContext().system().eventStream().subscribe(getSelf(), DeviceCommunicationStatusUpdated.class);
startHeartbeat();
}
@Override
public void postStop() {
stopHeartBeat();
}
private void startHeartbeat() {
LoggingAdapter log = Logging.getLogger(getContext().system(), this);
log.info("Starting heartbeat");
heartBeatScheduler = getContext().system().scheduler().
schedule(Duration.Zero(),
Duration.create(1, TimeUnit.SECONDS),
getContext().self(),
new Heartbeat(),
getContext().system().dispatcher(),
ActorRef.noSender());
}
private void stopHeartBeat() {
if(!heartBeatScheduler.isCancelled()) {
LoggingAdapter log = Logging.getLogger(getContext().system(), this);
log.info("Stopping heartbeat");
heartBeatScheduler.cancel();
}
}
public String getDeviceId() {
return deviceId;
}
public CommunicationStatus getCommunicationStatus(){
return commStatus;
}
@Override
public void onReceive(Object message) throws Exception {
LoggingAdapter log = Logging.getLogger(getContext().system(), this);
if(message instanceof Heartbeat){
log.info("Pum, pum");
}
else if (message instanceof DeviceCommunicationStatusUpdated){
DeviceCommunicationStatusUpdated event = (DeviceCommunicationStatusUpdated) message;
if(event.deviceId.equals(this.deviceId)){ // compare Strings by value, not by reference
log.info("Received communication status update. '{}' is now {}", deviceId, event.state);
this.commStatus =
event.state == DeviceCommunicationStatusUpdated.State.OK ?
CommunicationStatus.OK : CommunicationStatus.FAIL;
if(commStatus == CommunicationStatus.OK && heartBeatScheduler.isCancelled()){
startHeartbeat();
}
else {
stopHeartBeat();
}
}
}
else unhandled(message);
}
}
In the end there is no leak; I'm just new to Java and was impatient with the garbage collection. In any case, I would still like to know about resetting / restarting a scheduler.
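For the record, an Akka Cancellable cannot be resumed once cancelled, so recreating the schedule, as the actor above already does, is the expected pattern; the cancelled Cancellable is simply garbage-collected once unreferenced. A sketch of an alternative that avoids holding a Cancellable at all is to schedule one tick at a time and re-schedule from onReceive (the running flag and the scheduleNextBeat name are illustrative):
private boolean running = true;
private void scheduleNextBeat() {
    getContext().system().scheduler().scheduleOnce(
            Duration.create(1, TimeUnit.SECONDS),
            getSelf(),                    // deliver the next Heartbeat to ourselves
            new Heartbeat(),
            getContext().dispatcher(),
            getSelf());
}
@Override
public void onReceive(Object message) throws Exception {
    if (message instanceof Heartbeat) {
        if (running) {
            // ... heartbeat work ...
            scheduleNextBeat();           // one-shot, so there is nothing to cancel or dispose
        }
    } else if (message instanceof DeviceCommunicationStatusUpdated) {
        // Set running = false to pause the heartbeat; to restart it later,
        // set running = true and call scheduleNextBeat() once.
    } else {
        unhandled(message);
    }
}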
Can I flush all the logs on a time interval using the configuration file? I've searched a lot and didn't find anything. A shortcut is to use a Timer ourselves and flush all loggers, but I wanted to know whether the configuration file allows it.
The configuration file options are explained in the LogManager documentation. At this time, the only way to do this via the configuration file is to use the 'config' option to install your own code that flushes all loggers and performs the timer management. If you need access to the JVM lifecycle, you can create a custom handler that ignores all log records but listens to constructor and close method calls.
public class FlushAllHandler extends Handler {
private final ScheduledExecutorService ses;
private final Future<?> task;
public FlushAllHandler() {
//TODO: Look these up from the LogManager.
super.setLevel(Level.OFF); //Ignore all published records.
ses = Executors.newScheduledThreadPool(1);
long delay = 1L;
TimeUnit unit = TimeUnit.HOURS;
task = ses.scheduleWithFixedDelay(new Task(), delay, delay, unit);
}
@Override
public void publish(LogRecord record) {
//Allow a trigger filter to kick off a flush.
if (isLoggable(record)) {
ses.execute(new Task());
}
}
@Override
public void flush() {
}
@Override
public void close() throws SecurityException {
super.setLevel(Level.OFF);
task.cancel(false);
ses.shutdown();
try {
ses.awaitTermination(30, TimeUnit.SECONDS);
} catch (InterruptedException ex) {
Thread.currentThread().interrupt();
}
ses.shutdownNow();
}
private static class Task implements Runnable {
Task() {
}
@Override
public void run() {
final ArrayList<Handler> handlers = new ArrayList<>();
final LogManager manager = LogManager.getLogManager();
synchronized (manager) { //avoid ConcurrentModificationException
final Enumeration<String> e = manager.getLoggerNames();
while (e.hasMoreElements()) {
final Logger l = manager.getLogger(e.nextElement());
if (l != null) {
Collections.addAll(handlers, l.getHandlers());
}
}
}
//Don't hold LogManager lock while flushing handlers.
for (Handler h : handlers) {
h.flush();
}
}
}
}
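A short usage sketch (the class name below is illustrative): attach the handler to the root logger programmatically, as an alternative to listing it in the logging.properties handlers entry; every registered logger is then flushed on the schedule created in the handler's constructor.
import java.util.logging.Logger;
public final class InstallFlushAll {
    public static void main(String[] args) {
        // The handler ignores published records (level OFF); it only runs the scheduled flush task.
        Logger root = Logger.getLogger("");
        root.addHandler(new FlushAllHandler());
        root.info("FlushAllHandler installed");
    }
}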