Are seda and the actor model essentially equivalent? - concurrency

SEDA is essentially a set of independent "services" that communicate with each other via queues, which could further be abstracted as message passing.
The actor model is a set of independent functions, which communicate with each other through message passing.
Aren't they essentially equivalent? Am I missing some key difference?

From looking at http://www.eecs.harvard.edu/~mdw/proj/seda/ (archived here), they don't seem to be equivalent. SEDA could easily be implemented on top of the actor model, but an actor-based application doesn't need to have anything like SEDA's stages.
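To make the comparison concrete, here is a minimal sketch (not SEDA's actual API; `Stage` and all names are made up) of a SEDA-style pipeline in plain Scala: each stage is an independent service with its own thread and inbound queue, which is exactly the shape an actor with a mailbox has.

```scala
import java.util.concurrent.{LinkedBlockingQueue, TimeUnit}

// A SEDA-style stage: one worker thread draining an inbound queue,
// applying a function, and putting results on an outbound queue.
final class Stage[A, B](in: LinkedBlockingQueue[A],
                        out: LinkedBlockingQueue[B],
                        f: A => B) {
  private val worker = new Thread(() => while (true) out.put(f(in.take())))
  worker.setDaemon(true)
  worker.start()
}

// Wiring two stages together through queues:
val raw     = new LinkedBlockingQueue[Int]()
val doubled = new LinkedBlockingQueue[Int]()
val out     = new LinkedBlockingQueue[String]()
new Stage(raw, doubled, (n: Int) => n * 2)        // stage 1: transform
new Stage(doubled, out, (n: Int) => s"result=$n") // stage 2: format
raw.put(21)
// out.poll(1, TimeUnit.SECONDS) now yields "result=42"
```

The queues play the role of actor mailboxes; what SEDA adds beyond this (and what the actor model doesn't mandate) is explicit per-stage thread pools and admission control.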

Related

Would storing a rich object as an actor with persistence be a good idea?

If you are familiar with Trello, would storing an entire Trello board as an actor (with Akka Persistence) be a good use case?
A Trello board consists of:
lists
tasks in a list
each task can have comments and other properties
What are the general best practices or considerations when deciding whether Akka Persistence is a good fit for a given problem?
Any context where event sourcing is a good fit is a good fit for Akka Persistence.
Event sourcing, in turn, is broadly applicable: note that nearly any database you use is effectively doing event sourcing, just with exceptionally frequent snapshotting, truncation of the event log, and purging of old snapshots.
Event sourcing works really well when you want to explicitly model how entities in your domain change over time: you're effectively defining an algebra of changes. The richer (i.e. the further from just create/update) this model of change is, the more it's a fit for event sourcing. This modeling of change in turn facilitates letting other components of a system only update their state when needed.
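As a runnable sketch of such an "algebra of changes" for a Trello-like list (all names here — `TaskAdded`, `ListState`, etc. — are illustrative, not Akka Persistence API), state is just a fold over the event log:

```scala
// Events describing how a list changes over time.
sealed trait ListEvent
final case class TaskAdded(id: Int, title: String) extends ListEvent
final case class TaskCompleted(id: Int)            extends ListEvent

final case class ListState(tasks: Map[Int, String], done: Set[Int]) {
  // Applying one event is a pure function: (State, Event) => State.
  def updated(e: ListEvent): ListState = e match {
    case TaskAdded(id, title) => copy(tasks = tasks + (id -> title))
    case TaskCompleted(id)    => copy(done  = done + id)
  }
}

object ListState {
  val empty = ListState(Map.empty, Set.empty)
  // Recovery after a restart is just a replay of the persisted events.
  def replay(log: Seq[ListEvent]): ListState =
    log.foldLeft(empty)(_.updated(_))
}
```

The richer this event vocabulary is (compared to bare create/update), the more downstream components can react only to the changes they actually care about.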
Akka Persistence, especially when used with cluster sharding, lets you handle commands/requests without reading from a DB on every one: basically, you read from the DB when recovering an already-persisted actor, but subsequent commands/requests (until the actor passivates or dies) don't require such reads. The parent/child actor model in Akka also tends to lead to a natural encoding of many-to-one relationships.
In the example of a Trello board, I would probably have
each board be a persistent actor, which is parent to
lists, which are persistent actors and are each parents to
list items, which are also persistent actors
Depending on how much was under a list item, they might in turn have child persistent actors (for comments, etc.).
It's probably worth reading up on domain-driven design. While DDD doesn't require the actor model (nor vice versa), and neither of them requires event sourcing (nor vice versa), I and many others have found that they reinforce each other.
It mostly depends on how many writes the app needs to perform.
Akka Persistence is an approach to achieve very high write throughput while ensuring durability: if the actor dies and its in-memory state is lost, that is fine, because the write log has been persisted to disk.
If durability is necessary but very high write throughput is not (imagine the app updates the Trello board once per second), then it is totally fine to simply write the data to external storage.
would storing an entire Trello board as an actor (with akka persistence) be a good use case
I would say the size of the actor should match the size of an Aggregate Root. Making an entire board an Aggregate Root seems like a very bad choice: it means that all actions on that board are now serialized and none can happen concurrently. Why should changing the description of card #1 conflict with moving card #2 to a different category? Why should creating a new board category conflict with assigning card #3 to someone?
I mean, you could make an entire system a single actor and you wouldn't ever have to care about race conditions, but you'd also kill your scalability...

Observer pattern in API definition using Franca IDL

I am researching Franca as IDL for automatic API generation. http://franca.github.io/franca/
In my current API, the observer pattern is widely used. Listener classes and callbacks are defined. I do not find a way to actually model that in Franca IDL.
Are broadcasts the way to model them? If so, aren't broadcasts supposed to model interaction between a server and a client?
Franca IDL is primarily just what IDL stands for: you can describe an interface that a service offers. How clients gain access to the service is, on the technical level, a matter of the actual code generator and framework that is built on top of Franca IDL (e.g. CommonAPI).
So your code generator might generate code that makes use of the observer pattern - for example to propagate attribute changes from the service to the clients - but it is not part of the interface description, as the description should stay independent of the actual technology.
You can also have a look at the Franca deployment model (Franca User Guide, Chapter 6) if the generator needs additional information.
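As a rough sketch (exact syntax should be checked against the Franca User Guide; the interface and member names here are made up), an attribute plus a broadcast is typically how server-to-client notification is expressed, and the generator then produces the subscribe/notify observer wiring:

```
package org.example

interface TemperatureSensor {
    version { major 1 minor 0 }

    // Generated code (e.g. CommonAPI) lets clients subscribe to changes.
    attribute Float temperature

    // A broadcast models a notification pushed from service to clients.
    broadcast outOfRange {
        out {
            Float value
        }
    }
}
```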

Does Akka obsolesce Camel?

My understanding of Akka is that it provides a model whereby multiple, isolated threads can communicate with each other in a highly concurrent fashion. It uses the "actor model", where each thread is an "actor" with a specific job to do. You can orchestrate which messages get passed to which actors under what conditions.
I've used Camel before, and to me, I feel like it's sort of lost its luster/utility now that Akka is so mature and well documented. As I understand it, Camel is about enterprise integration, that is, integrating multiple disparate systems together, usually in some kind of service bus fashion.
But think about it: if I am currently using Camel to:
Poll an FTP server for a file, and once found...
Transform the contents of that file into a POJO, then...
Send out an email if the POJO has a certain state, or
Persist the POJO to a database in all other cases
I can do the exact same thing with Akka; I can have 1 Actor for each of those steps (Poll FTP, transform file -> POJO, email or persist), wire them together, and let Akka handle all the asynchrony/concurrency.
So even though Akka is a concurrency framework (using actors), and even though Camel is about integration, I have to ask: can't Akka solve everything that Camel does? In other words: what use cases still exist for using Camel over Akka?
Akka and Camel are two different beasts (besides one is a mountain and one is an animal).
You mention it yourself:
Akka is a tool to implement the actor model, i.e. a message-based concurrency engine for potentially distributed systems.
Camel is a DSL/framework to implement Enterprise Integration Patterns (EIPs).
There are a lot of things that would be pretty tough in Akka that are easy in Camel. Transactions, for sure. Then all the various transport logic and options, as Akka does not have an abstraction for an integration message. Then there are well-developed EIPs that are great in Camel: multicast, splitting, aggregation, XML/JSON handling, text-file parsing, HL7, to mention only a few. Of course, you can do it all in pure Java/Scala, but that's not the point. The point is to be able to describe the integration using a DSL, not to implement the underlying logic once again.
Nonetheless, Akka TOGETHER with Camel is pretty interesting, especially using Scala. Then you have EIPs on top of the actor semantics, which is pretty powerful in the right arena.
Example from akka.io
import akka.actor.Actor
import akka.camel.{ Producer, Oneway }
import akka.actor.{ ActorSystem, Props }
class Orders extends Actor with Producer with Oneway {
  def endpointUri = "jms:queue:Orders"
}
val sys = ActorSystem("some-system")
val orders = sys.actorOf(Props[Orders])
orders ! <order amount="100" currency="PLN" itemId="12345"/>
A full sample/tutorial can be found at Typesafe.

How to monitor communication in a SOA environment with an intermediary?

I'm looking for a possibility to monitor all messages in a SOA environment with an intermediary, which will be designed to enforce different rule sets over the messages' structure and sequences (e.g., it will check and ensure that service A has to be consumed before service B).
Obviously the first idea that came to mind is that WS-Addressing might help here, but I'm not sure it does, as I don't really see any mechanism there to ensure that a message will get delivered via a given intermediary (as there is in WS-Routing, an outdated proprietary protocol by Microsoft).
Or maybe there's even a different approach in which the monitor wouldn't be part of the route but would be notified of requests/responses, though that might in turn make it harder to actively enforce rules.
I'm looking forward to any suggestions.
You can implement a "service firewall" by intercepting all the calls in each service as part of your basic service host. Alternatively, you can use third-party solutions and route all your service calls through them (they will do the intercepting and then forward the calls to your services).
You can use ESBs to do the routing (and intercepting), or you can use dedicated solutions like IBM's DataPower, the XML firewall from Layer 7, etc.
For all my (technical) services I use messaging and the command processor pattern, which I describe here (without actually calling the pattern by name). I send a message and the framework finds the corresponding class that implements the interface corresponding to my message. I can create multiple classes that can handle my message, or a single class that handles a multitude of messages. In the article these are classes implementing the IHandleMessages interface.
Either way, as long as I can create multiple classes implementing this interface, and they are all called, I can easily add auditing without mixing this logic into my business logic. Just add an additional implementation for every single message, or enhance the framework so it also accepts generic IHandleMessages implementations. That class can then audit every single message and store all of them centrally.
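A minimal sketch of that dispatch idea (the names `HandlesMessages` and `Dispatcher` are made up for illustration; this is not the NServiceBus API): every handler registered for a message is invoked, so auditing is just one more handler.

```scala
// One handler per concern; all registered handlers see every message.
trait HandlesMessages[M] { def handle(msg: M): Unit }

final class Dispatcher[M] {
  private var handlers = Vector.empty[HandlesMessages[M]]
  def register(h: HandlesMessages[M]): Unit = handlers :+= h
  // Dispatch calls every handler, in registration order.
  def dispatch(msg: M): Unit = handlers.foreach(_.handle(msg))
}

// Business logic and central auditing stay completely separate:
val audit = scala.collection.mutable.Buffer.empty[String]
val d = new Dispatcher[String]
d.register(msg => println(s"processing $msg")) // business handler
d.register(msg => audit += msg)                // audit handler, added later
d.dispatch("CreateOrder")
```

The audit handler never touches the business handler's code, which is the point being made above.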
After doing that, you can find out more information about the messages and the flow. For example, if you put into the header of your WCF/MSMQ message where it came from, and perhaps a unique identifier for that single message, you can track the flow across various components.
NServiceBus also has this functionality for auditing and the team is working on additional tooling for this, called ServiceInsight.
Hope this helps.

Is it possible in Akka to create TypedActors with Proxies of more than one interface?

Akka typed actors are created in two parts using JDK proxies, whereby the proxy is generated from a specified interface and the implementation forms the backing managed instance. However, this means of construction prevents a TypedActor from implementing multiple types (interfaces).
I thought I had read somewhere that Akka 2.0 was going to change this. Does anyone have any thoughts on this, or a workaround? FYI, I am using Akka in pure Java, not Scala, at this stage.
Typed Actors in pre-2.0 are implemented using aspect weaving, and are thus not JDK proxies.
Typed actors in 2.x are based on JDK proxies, and you can essentially use as many interfaces as is supported by the JDK: Supercharging
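A plain-JDK illustration of why this works (no Akka involved; `Greeter`/`Counter` are made-up interfaces): a single `java.lang.reflect.Proxy` instance can implement several interfaces at once.

```scala
import java.lang.reflect.{InvocationHandler, Method, Proxy}

// Two unrelated interfaces (Scala traits with only abstract methods
// compile to plain Java interfaces).
trait Greeter { def greet(): String }
trait Counter { def count(): Int }

// One handler backs both interfaces, dispatching on the method name.
val handler = new InvocationHandler {
  def invoke(proxy: AnyRef, m: Method, args: Array[AnyRef]): AnyRef =
    m.getName match {
      case "greet" => "hello"
      case "count" => Int.box(42)
      case other   => throw new UnsupportedOperationException(other)
    }
}

// The proxy is an instance of BOTH Greeter and Counter.
val p = Proxy.newProxyInstance(
  classOf[Greeter].getClassLoader,
  Array[Class[_]](classOf[Greeter], classOf[Counter]),
  handler)
```

This is the mechanism multi-interface TypedActors sit on top of; the Akka-managed instance plays the role of the invocation handler's target.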
There is an official opinion that typed actors are not the best choice (see When_to_use_Typed_Actors). If possible, try to use untyped actors with typed messages.
We have been using messages of the kind:
class Contact[T]
case class Signal[T](contact: Contact[T], data: T)
A contact instance is easy to check for equality (e.g. in an if/else-if chain), but usually a map from contacts to handlers is sufficient to handle all inputs.
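A self-contained, runnable sketch of that scheme (the `SignalRouter` name and shape are illustrative; SynapseGrid itself is richer): a contact is a typed channel identity, and a map from contact to handler replaces the if/else-if chain in an untyped actor's receive.

```scala
// A Contact is an identity for a typed channel; reference equality is
// enough for it to serve as a map key.
final class Contact[T]
final case class Signal[T](contact: Contact[T], data: T)

final class SignalRouter {
  private var handlers = Map.empty[Contact[_], Any => Unit]
  // The cast is safe at runtime because a handler registered for
  // Contact[T] only ever receives data from a Signal[T].
  def on[T](c: Contact[T])(f: T => Unit): Unit =
    handlers += (c -> f.asInstanceOf[Any => Unit])
  def receive(s: Signal[_]): Unit =
    handlers.get(s.contact).foreach(h => h(s.data))
}
```

Dispatching is then just a map lookup on the contact, with the type of `data` guaranteed by construction of the `Signal`.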
The idea of strictly typed signals is developed further in the SynapseGrid library. It defines a Builder to associate typed handlers with typed contacts.