Manual pause and resume functionality in the AWS SWF Java framework

Does SWF natively support manual pause and resume of workflows in the Java framework? If not, is there any way to achieve those semantics?
Edit: I implemented the following example, and it seems to be working in initial testing. Is there anything that could break with this? My workflow is going to be long running (~3-5 hours), with the same activity being executed multiple times with different params.
import com.amazonaws.services.simpleworkflow.flow.annotations.Asynchronous;
import com.amazonaws.services.simpleworkflow.flow.core.Promise;
import com.amazonaws.services.simpleworkflow.flow.core.Settable;
public class GreeterWorkflowImpl implements GreeterWorkflow {
private GreeterActivitiesClient operations = new GreeterActivitiesClientImpl();
Settable<Void> paused = new Settable<>();
public void greet() {
Promise<String> fs = getGreeting(0, operations.getName());
print(fs);
}
@Asynchronous
private Promise<String> getGreeting(int count, Promise<String> name)
{
if (count > 10)
return name;
return getGreeting(count, name, paused);
}
@Asynchronous
private Promise<String> getGreeting(int count, Promise<String> name, Settable<Void> paused) {
Promise<String> returnString = operations.getGreeting(name.get());
return getGreeting(count + 1, returnString);
}
@Asynchronous
private void print(Promise<String> finalString)
{
System.out.println("Final String is " + finalString.get());
}
// @Signal method
@Override
public void pause() {
paused = new Settable<>();
}
// @Signal method
@Override
public void resume() {
paused.set(null);
}
}

If you get multiple resume signals, you will be setting the paused Settable again (when it is already ready), so you might end up with an unhandled IllegalStateException.
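One way to avoid that is to make the resume signal idempotent by checking the Settable before setting it, so a duplicate resume becomes a no-op. A minimal sketch, assuming Settable exposes the isReady() accessor it inherits from Promise:

// @Signal method
@Override
public void resume() {
    // Guard against duplicate resume signals: calling set() on an
    // already-ready Settable would throw an IllegalStateException.
    if (!paused.isReady()) {
        paused.set(null);
    }
}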

Testing a MassTransit Consumer using the InMemoryTestFixture

I want to design my tests around a MassTransit consumer to which I can send messages with a variety of content. Based on the content of the message, the consumer will "do work" and relay messages.
The problem I have is that when running two of these tests, in separate test fixtures, something seems to interfere with the second test. Run individually, each test succeeds.
After looking through the MassTransit test project, I have come up with some example test code to demonstrate the problem I'm having.
[TestFixture]
public class PingPongMessageTestFixture : InMemoryTestFixture
{
private PongConsumer _pongConsumer;
protected override void ConfigureInMemoryReceiveEndpoint(IInMemoryReceiveEndpointConfigurator configurator)
{
_received = Handled<IPongMessage>(configurator);
}
protected override void PreCreateBus(IInMemoryBusFactoryConfigurator configurator)
{
var _pingConsumer = new PingConsumer();
_pongConsumer = new PongConsumer();
configurator.ReceiveEndpoint("test_ping_queue", e =>
{
e.Consumer(() => _pingConsumer);
});
configurator.ReceiveEndpoint("test_pong_queue", e =>
{
e.Consumer(() => _pongConsumer);
});
}
Task<ConsumeContext<IPongMessage>> _received;
[Test]
public async Task test_how_to_test_consumers()
{
await Bus.Publish<IPingMessage>(new { MessageId = 100 });
await _received;
Assert.IsTrue(_pongConsumer.hitme);
Assert.AreEqual(100, _pongConsumer.pongMessage.MessageId);
}
public class PingConsumer : IConsumer<IPingMessage>
{
public Task Consume(ConsumeContext<IPingMessage> context)
{
context.Publish<IPongMessage>(new { context.Message.MessageId });
return Task.CompletedTask;
}
}
public class PongConsumer : IConsumer<IPongMessage>
{
internal bool hitme;
internal IPongMessage pongMessage;
public Task Consume(ConsumeContext<IPongMessage> context)
{
hitme = true;
pongMessage = context.Message;
return Task.CompletedTask;
}
}
public interface IPingMessage
{
int MessageId { get; set; }
}
public interface IPongMessage
{
int MessageId { get; set; }
}
}
This test sends a message to the ping consumer, which itself sends a message to the pong consumer.
This by itself works and tests that the ping consumer sends a pong message. In a real-life scenario the "ping" consumer would send update messages to another service, and the pong consumer is just a test consumer used by the tests.
If I have a second test fixture, which for this question is very similar, it fails when both tests are run together, though individually it passes.
The test does the same thing:
[TestFixture]
public class DingDongMessageTestFixture : InMemoryTestFixture
{
private DongConsumer _dongConsumer;
protected override void ConfigureInMemoryReceiveEndpoint(IInMemoryReceiveEndpointConfigurator configurator)
{
_received = Handled<IDongMessage>(configurator);
}
protected override void PreCreateBus(IInMemoryBusFactoryConfigurator configurator)
{
var _dingConsumer = new DingConsumer();
_dongConsumer = new DongConsumer();
configurator.ReceiveEndpoint("test_ding_queue", e =>
{
e.Consumer(() => _dingConsumer);
});
configurator.ReceiveEndpoint("test_dong_queue", e =>
{
e.Consumer(() => _dongConsumer);
});
}
Task<ConsumeContext<IDongMessage>> _received;
[Test]
public async Task test_how_to_test_consumers()
{
await Bus.Publish<IDingMessage>(new { MessageId = 100 });
await _received;
Assert.IsTrue(_dongConsumer.hitme);
Assert.AreEqual(100, _dongConsumer.pongMessage.MessageId);
}
public class DingConsumer : IConsumer<IDingMessage>
{
public Task Consume(ConsumeContext<IDingMessage> context)
{
context.Publish<IDongMessage>(new { context.Message.MessageId });
return Task.CompletedTask;
}
}
public class DongConsumer : IConsumer<IDongMessage>
{
internal bool hitme;
internal IDongMessage pongMessage;
public Task Consume(ConsumeContext<IDongMessage> context)
{
hitme = true;
pongMessage = context.Message;
return Task.CompletedTask;
}
}
public interface IDingMessage
{
int MessageId { get; set; }
}
public interface IDongMessage
{
int MessageId { get; set; }
}
}
Is this a good approach for testing MassTransit consumers?
If so, do I need to reset the InMemoryTestFixture, somehow, per test fixture?
In your test fixtures, I don't believe there should be any conflict, but because of the interaction with NUnit, there may be something of which I'm unaware because of the base class inheritance that's being used.
If you use the InMemoryTestHarness directly (the same functionality as the test fixtures, but without any testing framework dependency), I would expect that you should not experience any interactions between two simultaneously executing tests.
Your approach is the way it should be done, but again, I'd suggest using the InMemoryTestHarness instead of the fixture.
An example test is linked: https://github.com/MassTransit/MassTransit/blob/master/src/MassTransit.Tests/Testing/ConsumerTest_Specs.cs
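For reference, a rough sketch of what the harness-based version of the ping test could look like. This is hedged: exact member names and their sync/async shape vary between MassTransit versions, and it assumes the 5.x-era InMemoryTestHarness API (using MassTransit.Testing and System.Linq) together with the IPingMessage/IPongMessage and PingConsumer/PongConsumer types from the question.

[Test]
public async Task test_ping_pong_with_harness()
{
    var harness = new InMemoryTestHarness();
    harness.Consumer<PingConsumer>();               // each consumer gets its own receive endpoint
    var pongHarness = harness.Consumer<PongConsumer>();
    await harness.Start();
    try
    {
        await harness.Bus.Publish<IPingMessage>(new { MessageId = 100 });
        // The harness message lists wait (with a timeout) for messages to be observed.
        Assert.IsTrue(harness.Consumed.Select<IPingMessage>().Any());
        Assert.IsTrue(pongHarness.Consumed.Select<IPongMessage>().Any());
    }
    finally
    {
        await harness.Stop();
    }
}

Because each test owns and disposes its own harness, there is no shared bus state left over to interfere with the next fixture.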
The key to this behaviour lies in the source code for the InMemoryTestFixture.
public class InMemoryTestFixture : BusTestFixture
{
...
[OneTimeSetUp]
public Task SetupInMemoryTestFixture()
{
return InMemoryTestHarness.Start();
}
[OneTimeTearDown]
public async Task TearDownInMemoryTestFixture()
{
await InMemoryTestHarness.Stop().ConfigureAwait(false);
InMemoryTestHarness.Dispose();
}
...
}
As you can see from this snippet, the test harness is started and stopped in the [OneTimeSetUp] and [OneTimeTearDown] methods, i.e. before any tests in the [TestFixture] are run and after all tests in the fixture are complete - not for each test case.
My solution is to create a new test fixture each time. I believe this is what the writers of MassTransit.TestFramework intended as it is what they do in their Common_SagaStateMachine example.

Unit testing of Saga handlers in Rebus and correlation issues

I have this simple Saga in Rebus:
public class MySaga : Saga<MySagaData>,
IAmInitiatedBy<Event1>,
IHandleMessages<Event2>
{
private IBus bus;
private ILog logger;
public MySaga(IBus bus, ILog logger)
{
if (bus == null) throw new ArgumentNullException("bus");
if (logger == null) throw new ArgumentNullException("logger");
this.bus = bus;
this.logger = logger;
}
protected override void CorrelateMessages(ICorrelationConfig<MySagaData> config)
{
config.Correlate<Event1>(m => m.MyObjectId.Id, s => s.Id);
config.Correlate<Event2>(m => m.MyObjectId.Id, s => s.Id);
}
public Task Handle(Event1 message)
{
return Task.Run(() =>
{
this.Data.Id = message.MyObjectId.Id;
this.Data.State = MyEnumSagaData.Step1;
var cmd = new ResponseCommandToEvent1(message.MyObjectId);
bus.Send(cmd);
});
}
public Task Handle(Event2 message)
{
return Task.Run(() =>
{
this.Data.State = MyEnumSagaData.Step2;
var cmd = new ResponseCommandToEvent2(message.MyObjectId);
bus.Send(cmd);
});
}
}
and thanks to the kind mookid8000 I can test the saga using a FakeBus and a SagaFixture:
[TestInitialize]
public void TestInitialize()
{
var log = new Mock<ILog>();
bus = new FakeBus();
fixture = SagaFixture.For<MySaga>(() => new MySaga(bus, log.Object));
idTest = new MyObjectId(Guid.Parse("1B2E7286-97E5-4978-B5B0-D288D71AD670"));
}
[TestMethod]
public void TestIAmInitiatedBy()
{
evt = new Event1(idTest);
fixture.Deliver(evt);
var testableFixture = fixture.Data.OfType<MySagaData>().First();
Assert.AreEqual(MyEnumSagaData.Step1, testableFixture.State);
// ... more asserts
}
[TestMethod]
public void TestIHandleMessages()
{
evt = new Event2(idTest);
fixture.Deliver(evt);
var testableFixture = fixture.Data.OfType<MySagaData>().First();
Assert.AreEqual(MyEnumSagaData.Step2, testableFixture.State);
// ... more asserts
}
[TestCleanup]
public void TestCleanup()
{
fixture.Dispose();
bus.Dispose();
}
The first test method, which checks IAmInitiatedBy, executes correctly and no error is thrown, while the second test fails. It looks like a correlation issue, since fixture.Data contains no elements and fixture.LogEvents contains as its last element this error: Could not find existing saga data for message Event2/b91d161b-eb1b-419d-9576-2c13cd9d9c51.
What is this GUID? It is completely different from the one I defined in the unit test. Any ideas? Is what I'm trying to test legal (since I'm using an in-memory bus)?
This line is bad: this.Data.Id = message.MyObjectId.Id. If you had checked the value of Data.Id before you overwrote it, you would have noticed that the property already had a value.
You do not assign the saga ID - Rebus does that. And you should leave that property alone :)
Regarding your error - when Rebus wants to log information about a specific message, it logs a short name for the type and the message ID, i.e. the value of the automatically-assigned rbs2-msg-id header. In other words: it's not the value of the property m.MyObjectId.Id that you're seeing, it's the message ID.
Since the saga fixture is re-initialized for every test run, and you only deliver an Event2 to it (which is not allowed to initiate a new instance), the saga will not be hit.
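Putting those points together, a minimal sketch of the corrected initiating handler - hedged in that it only removes the Data.Id assignment and otherwise keeps the question's own code:

public Task Handle(Event1 message)
{
    return Task.Run(() =>
    {
        // Do not touch Data.Id - Rebus assigns and manages the saga ID itself.
        this.Data.State = MyEnumSagaData.Step1;
        bus.Send(new ResponseCommandToEvent1(message.MyObjectId));
    });
}

And for the second unit test, deliver an Event1 to the fixture before the Event2, so that a saga instance actually exists for the Event2 to correlate with.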

Restarting a cancelled scheduler in Akka

I am just starting with Akka and have created a test application. In it I create a bunch of actors, each of which creates a scheduler to generate a heartbeat event. Upon another type of event, I cancel the scheduler with heartbeat.cancel(), but I'd like to restart it when yet another event occurs. If I recreate the scheduler, I see that memory consumption increases continuously.
The question then would be either how to resume the scheduler or how to dispose of it properly.
This is the code for that actor:
public class Device extends UntypedActor {
enum CommunicationStatus{
OK,
FAIL,
UNKNOWN
}
private static class Heartbeat {
}
public final String deviceId;
private CommunicationStatus commStatus;
private Cancellable heartBeatScheduler;
public Device(String Id)
{
deviceId = Id;
commStatus = CommunicationStatus.UNKNOWN;
}
@Override
public void preStart() {
getContext().system().eventStream().subscribe(getSelf(), DeviceCommunicationStatusUpdated.class);
startHeartbeat();
}
@Override
public void postStop() {
stopHeartBeat();
}
private void startHeartbeat() {
LoggingAdapter log = Logging.getLogger(getContext().system(), this);
log.info("Starting heartbeat");
heartBeatScheduler = getContext().system().scheduler().
schedule(Duration.Zero(),
Duration.create(1, TimeUnit.SECONDS),
getContext().self(),
new Heartbeat(),
getContext().system().dispatcher(),
ActorRef.noSender());
}
private void stopHeartBeat() {
if(!heartBeatScheduler.isCancelled()) {
LoggingAdapter log = Logging.getLogger(getContext().system(), this);
log.info("Stopping heartbeat");
heartBeatScheduler.cancel();
}
}
public String getDeviceId() {
return deviceId;
}
public CommunicationStatus getCommunicationStatus(){
return commStatus;
}
@Override
public void onReceive(Object message) throws Exception {
LoggingAdapter log = Logging.getLogger(getContext().system(), this);
if(message instanceof Heartbeat){
log.info("Pum, pum");
}
else if (message instanceof DeviceCommunicationStatusUpdated){
DeviceCommunicationStatusUpdated event = (DeviceCommunicationStatusUpdated) message;
if(event.deviceId.equals(this.deviceId)){
log.info("Received communication status update. '{}' is now {}", deviceId, event.state);
this.commStatus =
event.state == DeviceCommunicationStatusUpdated.State.OK ?
CommunicationStatus.OK : CommunicationStatus.FAIL;
if(commStatus == CommunicationStatus.OK && heartBeatScheduler.isCancelled()){
startHeartbeat();
}
else {
stopHeartBeat();
}
}
}
else unhandled(message);
}
}
In the end there is no leak; it's just that I'm new to Java and was impatient with the garbage collection. In any case, I would still like to know about resetting / restarting a scheduler.
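For what it's worth, an Akka Cancellable is one-shot: once cancelled it cannot be resumed, so re-scheduling (which startHeartbeat() above already does) is the intended pattern, and the old Cancellable is simply reclaimed by the garbage collector once nothing references it. A small sketch of a restart guard, reusing the fields from the actor above:

private void restartHeartbeat() {
    // A cancelled Cancellable cannot be resumed; create a fresh schedule instead.
    if (heartBeatScheduler == null || heartBeatScheduler.isCancelled()) {
        startHeartbeat(); // assigns a new Cancellable to heartBeatScheduler
    }
}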

RavenDB keeps throwing a ConcurrencyException

I keep getting a ConcurrencyException when trying to update the same document multiple times in succession. The message is: PUT attempted on document '<id>' using a non current etag.
On every save from our UI we publish an event using MassTransit. This event is sent to the subscriber queues, but I took the event handlers offline (testing offline subscribers). Once the event handler comes online, the queue is read and the messages are processed as intended.
However, since the same object is in the queue multiple times, the first write succeeds but the next one doesn't and throws this ConcurrencyException.
I use a factory class to get a consistent IDocumentStore and IDocumentSession in all my applications. I specifically set UseOptimisticConcurrency = false in the GetSession() method.
public static class RavenFactory
{
public static IDocumentStore CreateDocumentStore()
{
var store = new DocumentStore() { ConnectionStringName = "RavenDB" };
// Setting Conventions
store.Conventions.RegisterIdConvention<MyType>((db, cmd, e) => e.MyProperty.ToString());
store.Conventions.RegisterAsyncIdConvention<MyType>((db, cmd, e) => new CompletedTask<string>(e.MyProperty.ToString()));
// Registering Listeners
store
.RegisterListener(new TakeNewestConflictResolutionListener())
.RegisterListener(new DocumentConversionListener())
.RegisterListener(new DocumentStoreListener());
// Initialize and return
store.Initialize();
return store;
}
public static IDocumentSession GetSession(IDocumentStore store)
{
var session = store.OpenSession();
session.Advanced.UseOptimisticConcurrency = false;
return session;
}
}
The event handler looks like this; the IDocumentSession gets injected using dependency injection.
Here is the logic to get an instance of IDocumentSession.
private static void InitializeRavenDB(IUnityContainer container)
{
container.RegisterInstance<IDocumentStore>(RavenFactory.CreateDocumentStore(), new ContainerControlledLifetimeManager());
container.RegisterType<IDocumentSession, DocumentSession>(new PerResolveLifetimeManager(), new InjectionFactory(c => RavenFactory.GetSession(c.Resolve<IDocumentStore>())));
}
And here is the actual event handler which hits the ConcurrencyException.
public class MyEventHandler : Consumes<MyEvent>.All, IConsumer
{
private readonly IDocumentSession _session;
public MyEventHandler(IDocumentSession session)
{
if (session == null) throw new ArgumentNullException("session");
_session = session;
}
public void Consume(MyEvent message)
{
Console.WriteLine("MyEvent received: Id = '{0}'", message.MyProperty);
try
{
_session.Store(message);
_session.SaveChanges();
}
catch (Exception ex)
{
var exc = ex.ToString();
// Deal with concurrent writes ...
throw;
}
}
}
I want to ignore any ConcurrencyException for now, until we can sort out with the business how to tackle concurrency.
So, any ideas why I get the ConcurrencyException? I want the save to happen no matter whether the document has been updated before or not.
I am unfamiliar with configuring Unity, but you always want a singleton of the IDocumentStore. Below, I have coded the singleton manually, but I'm sure Unity would support it:
public static class RavenFactory
{
private static IDocumentStore store;
private static object syncLock = new object();
public static IDocumentStore CreateDocumentStore()
{
if(RavenFactory.store != null)
return RavenFactory.store;
lock(syncLock)
{
if(RavenFactory.store != null)
return RavenFactory.store;
var localStore = new DocumentStore() { ConnectionStringName = "RavenDB" };
// Setting Conventions
localStore.Conventions.RegisterIdConvention<MyType>((db, cmd, e) => e.MyProperty.ToString());
localStore.Conventions.RegisterAsyncIdConvention<MyType>((db, cmd, e) => new CompletedTask<string>(e.MyProperty.ToString()));
// Registering Listeners
localStore
.RegisterListener(new TakeNewestConflictResolutionListener())
.RegisterListener(new DocumentConversionListener())
.RegisterListener(new DocumentStoreListener());
// Initialize and return
localStore.Initialize();
RavenFactory.store = localStore;
return RavenFactory.store;
}
}
// As before
// public static IDocumentSession GetSession(IDocumentStore store)
//
}
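If, on top of the singleton store, you still want to swallow the concurrency failure until a business rule is agreed upon, the catch block in the consumer can target the concurrency exception specifically. A hedged sketch - it assumes the Raven.Abstractions.Exceptions.ConcurrencyException type of the 2.x/3.x client, so adjust the namespace to your client version:

public void Consume(MyEvent message)
{
    Console.WriteLine("MyEvent received: Id = '{0}'", message.MyProperty);
    try
    {
        _session.Store(message);
        _session.SaveChanges();
    }
    catch (Raven.Abstractions.Exceptions.ConcurrencyException)
    {
        // The document was already written with a newer etag; ignore for now
        // until a policy for concurrent updates is decided with the business.
    }
}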

Periodic Java logging

Can I flush all the logs on a time interval using the configuration file? I searched a lot and didn't find anything. A shortcut is to use a Timer ourselves and flush all loggers, but I wanted to know whether the configuration file allows it.
The configuration file options are explained in the LogManager documentation. At this time, the only way to do this via the configuration file is to use the 'config' option to install your own custom code that flushes all loggers and performs the timer management. If you need access to the JVM lifecycle, you can create a custom handler that ignores all log records but listens to constructor and close method calls.
import java.util.ArrayList;
import java.util.Collections;
import java.util.Enumeration;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogManager;
import java.util.logging.LogRecord;
import java.util.logging.Logger;
public class FlushAllHandler extends Handler {
private final ScheduledExecutorService ses;
private final Future<?> task;
public FlushAllHandler() {
//TODO: Look these up from the LogManager.
super.setLevel(Level.OFF); //Ignore all published records.
ses = Executors.newScheduledThreadPool(1);
long delay = 1L;
TimeUnit unit = TimeUnit.HOURS;
task = ses.scheduleWithFixedDelay(new Task(), delay, delay, unit);
}
@Override
public void publish(LogRecord record) {
//Allow a trigger filter to kick off a flush.
if (isLoggable(record)) {
ses.execute(new Task());
}
}
@Override
public void flush() {
}
@Override
public void close() throws SecurityException {
super.setLevel(Level.OFF);
task.cancel(false);
ses.shutdown();
try {
ses.awaitTermination(30, TimeUnit.SECONDS);
} catch (InterruptedException ex) {
Thread.currentThread().interrupt();
}
ses.shutdownNow();
}
private static class Task implements Runnable {
Task() {
}
@Override
public void run() {
final ArrayList<Handler> handlers = new ArrayList<>();
final LogManager manager = LogManager.getLogManager();
synchronized (manager) { //avoid ConcurrentModificationException
final Enumeration<String> e = manager.getLoggerNames();
while (e.hasMoreElements()) {
final Logger l = manager.getLogger(e.nextElement());
if (l != null) {
Collections.addAll(handlers, l.getHandlers());
}
}
}
//Don't hold LogManager lock while flushing handlers.
for (Handler h : handlers) {
h.flush();
}
}
}
}
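A sketch of how such a handler could then be installed from the logging configuration file. The package name and the per-handler properties are placeholders (the constructor's TODO would read them via LogManager); the handlers entry is the standard java.util.logging mechanism that constructs the listed handler classes when LogManager initializes:

# logging.properties
# Construct the flushing handler (plus a console handler) at startup.
handlers = com.example.logging.FlushAllHandler, java.util.logging.ConsoleHandler
# Hypothetical properties the handler's constructor could look up instead of hard-coding them.
com.example.logging.FlushAllHandler.delay = 1
com.example.logging.FlushAllHandler.unit = HOURS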