How do I create an Actor that has non-serialisable dependencies on a remote node? - akka

Suppose you have an Actor, MyActor, which depends on an object which cannot be serialised. Examples include:
a Jackson ObjectMapper, for manipulating JSON
a service of some kind obtained from a DI container
The Props for such an actor might look like this in Java:
public static Props props(ObjectMapper m, SomeService s) {
    return Props.create(new Creator<MyActor>() {
        @Override
        public MyActor create() throws Exception {
            return new MyActor(m, s);
        }
    });
}
The dependencies are passed into the constructor of the Actor. The problem is that this will not work in a clustered environment: these objects are not serialisable so trying to create the actor on a remote node will fail.
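Deployed locally, by contrast, this works fine; a quick sketch (assuming an ActorSystem named system and existing mapper and someService instances):
// Local deployment only: the ObjectMapper and service never leave this JVM.
ActorRef myActor = system.actorOf(MyActor.props(mapper, someService), "myActor");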
How do we solve this problem without using static global state?

There are different kinds of solution; it depends on your needs.
You can, for example, wrap each dependency in an actor behind a Cluster Singleton and then send the ActorRef to it across the cluster. Your actor's props would then have a signature like this:
public static Props props(ActorRef refToMapperWrapper, ActorRef refToServiceWrapper)
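A minimal sketch of such a wrapper actor (SomeService, ServiceRequest, and the handle method are hypothetical stand-ins):
public class ServiceWrapper extends AbstractActor {
    // The non-serialisable service lives only on the node running this actor;
    // only serialisable request/response messages ever cross the wire.
    private final SomeService service = new SomeService();

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(ServiceRequest.class, req ->
                getSender().tell(service.handle(req), getSelf()))
            .build();
    }
}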
The other solution is to instantiate a new service and object mapper on each node that needs them. You then send the objects needed to create the Service/ObjectMapper (i.e. their constructor arguments) between the nodes, so those objects must be serialisable in some way.
The ObjectMapper is better created on each node independently; its configuration, however, can be sent across the nodes.
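For instance, the Props could close over only a small serialisable config object and build the mapper inside the Creator, on whichever node the actor is actually deployed (MapperConfig and its indentOutput flag are hypothetical):
public static Props props(MapperConfig config) {
    return Props.create(new Creator<MyActor>() {
        @Override
        public MyActor create() throws Exception {
            // Only the serialisable config crossed the wire; the
            // non-serialisable ObjectMapper is built locally.
            ObjectMapper mapper = new ObjectMapper();
            if (config.indentOutput()) {
                mapper.enable(SerializationFeature.INDENT_OUTPUT);
            }
            return new MyActor(mapper);
        }
    });
}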

Related

Accessing resources from a stack in a CDK app created in another stack within the same app

I am using an in-house construct library which is responsible for creating an ECS cluster and service alongside the task definition to run. Everything happens inside the construct itself: it creates all these resources in its constructor and does not return anything (it's just a function call provided by that library). I pass my App object as the scope of that stack when I call that function, so the same App object holds both the library's stack and the stack I created in my own CDK code.
My point is that, in order to pass the name of the task definition to my Lambda (as a parameter to the RunTask AWS SDK call that starts a task), I need the name/ref of the task definition that was created inside the in-house construct library.
I wonder what's the best way to access such resources in AWS CDK. Is it OK to use class members as getters to access these resources, or is ConstructNode the proper way to handle such a case?
You can add properties to your CDK stacks and reference them in others. Behind the scenes, each CloudFormation stack creates outputs and parameters. (This may not always be ideal; in my particular use case it made the application harder to refactor and update.)
// Pipeline contains an ECR repository we'd like to reference elsewhere
public class PipelineStack : Stack
{
    public IRepository EcrRepository { get; }

    public PipelineStack(Construct parent, string id, IPipelineStackProps props)
        : base(parent, id, props)
    {
        EcrRepository = new Repository(
            this,
            "EcrRepository",
            new RepositoryProps
            {
                // props
            });

        // rest of stack...
    }
}
// Given IApiStackProps similar to:
public interface IApiStackProps : IStackProps
{
    IRepository Repository { get; }
}

// Now we'd like to load an ECR image found in that referenced repository
public class ApiStack : Stack
{
    public ApiStack(Construct parent, string id, IApiStackProps props)
        : base(parent, id, props)
    {
        var repo = Repository.FromRepositoryName(
            this,
            "EcrRepository",
            props.Repository.RepositoryName);

        // taskDef and imageTag are defined elsewhere in this stack
        var container = new ContainerDefinition(
            this,
            "ApiContainer",
            new ContainerDefinitionProps
            {
                TaskDefinition = taskDef,
                Image = ContainerImage.FromEcrRepository(repo, imageTag.ValueAsString),
            });

        // rest of stack...
    }
}
This lets you create two stacks that share resources quite simply:
var app = new App(new AppProps());

var pipelineStack = new PipelineStack(app, "ExamplePipeline", new PipelineStackProps
{
    // some props...
});

new ApiStack(app, apiStackName, new ApiStackProps
{
    Repository = pipelineStack.EcrRepository,
});
This led me to create a layered series of stacks, with the topmost stacks requiring resources from those below.
From personal experience I'd recommend staying away from too many inter-stack dependencies, for the aforementioned reasons. Where possible, move "shared" or referenced resources into a base stack you can extend (.NET's Lazy<T> made it easy to create resources only when accessed). With this shared-base approach the number of stacks will likely decrease, allowing easier long-term support and updating.
Unfortunately I do not have a good public example of this second approach; however, the code snippets above are from my ECS CDK example repo, which shares a lot between stacks.

CDI Inject in Stateless Session Beans requests

Currently we have an elaborate POJO object structure for handling a webservice request, called a 'processor'.
Remote and local EJBs and the PersistenceContext used while serving this request are initialized in the stateless bean and handed to this 'processor's' constructor; the processor is re-created during each webservice request.
If I do not want to revert to JNDI lookups deep down in my 'processor', I keep dragging all these EJBs around through my code.
Enter CDI. I would like to be able to inject these EJBs wherever I need them in this 'processor'.
However, I also noticed this means that the current 'processor' has to become a CDI bean itself: so, using @Inject in the Stateless Session Bean that implements the webservice.
When I do this, the entire lifecycle of the processor becomes bound to the bean and not to the request it is serving.
Suddenly I have to take into consideration that I should not retain state (other than the injected objects) in the processor, since this state will be shared between multiple webservice invocations. As a programmer, this is not making my life easier.
So: how should I go about doing this? I've read about scoping, but I'm not sure how or if it would help me.
Example, stateless bean:
@Stateless
@WebService
public class ExampleBean {

    @Inject
    Processor requestScopedInstance;

    int beanDependentScopeCounter;

    public String sayHello() {
        System.out.println("bean object id: " + this.toString());
        return requestScopedInstance.sayHello(beanDependentScopeCounter++);
    }
}
interface:
public interface Processor {
    String sayHello(int beanScopedCounter);
}
Implementation:
public class ProcessorImpl implements Processor {

    private int requestScopedCounter = 0;

    @Override
    public String sayHello(int beanScopedCounter) {
        return "test, requestScoped: " + requestScopedCounter++ + ", beansScoped: " + beanScopedCounter;
    }
}
"When I do this the entire lifecycle of the processor becomes bound to the bean and not to the request it is serving": that is not correct. It is only the case if you don't use @ApplicationScoped, @SessionScoped, or @RequestScoped.
So:
Annotate your processor with @RequestScoped.
You don't need to hand over the EJBs; you can just inject them where needed.
Use @PostConstruct-annotated methods for constructor code that uses injected objects.
Stateless POJOs can be annotated @ApplicationScoped; non-stateless POJOs can stay dependent-scoped, which is the default.
This is possible because proxies are injected rather than the actual beans; through these proxies CDI makes sure the correctly scoped instance is used for each particular call.
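A minimal sketch of the request-scoped processor this describes (the injected EJB type SomeService is a hypothetical placeholder):
@RequestScoped
public class ProcessorImpl implements Processor {

    // Injected directly where needed instead of being dragged through constructors.
    @EJB
    private SomeService someService;

    private int requestScopedCounter = 0;

    @PostConstruct
    void init() {
        // Constructor-style initialisation that needs the injected objects goes here.
    }

    @Override
    public String sayHello(int beanScopedCounter) {
        return "test, requestScoped: " + requestScopedCounter++ + ", beanScoped: " + beanScopedCounter;
    }
}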

Akka clustering and Actor serialization

I am trying to define a ClusterRouterPool that manages a BalancingPool of actors across my cluster. The actual end actor that does the work uses the Gson library's Gson class, which is not Serializable. When I bring up a second node in the cluster, as it joins the primary node I get NotSerializableExceptions thrown on the leader (where the ClusterRouterPool was initialized).
How do I work around this? In order to use clustered actors, must every member of the actors being clustered be serializable?
Don't serialize your Gson object; recreate it when the actor instance is created:
public class Worker extends AbstractActor {

    // Built afresh wherever the actor is instantiated; transient keeps it out
    // of any serialized form.
    private transient Gson gson = new Gson();

    // ...
}
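Since the Gson instance is created in the field initialiser, the Props can stay free of any non-serialisable state; a minimal sketch:
// Nothing non-serialisable is captured here; each node that instantiates the
// worker builds its own Gson locally.
Props workerProps = Props.create(Worker.class);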

DryIOC Event Aggregator

I'm trying to implement an event aggregator using DryIoc. I have an event dispatcher as follows:
public class DryIocEventDispatcher : IEventDispatcher
{
    private readonly IContainer _container;

    public DryIocEventDispatcher(IContainer container)
    {
        _container = container;
    }

    public void Dispatch<TEvent>(TEvent eventToDispatch) where TEvent : EventArgs
    {
        foreach (var handler in _container.ResolveMany<IHandles<TEvent>>())
        {
            handler.Handle(eventToDispatch);
        }
    }
}
I have a number of classes that can handle events, indicated by the following interface:
public interface IHandles<T> where T : System.EventArgs
{
    void Handle(T args);
}
The gist of it is that when I call the event dispatcher's Dispatch method and pass in a type that inherits from EventArgs, it grabs from the IoC container all the types that implement IHandles<> and calls the Handle method on them.
An event type may be handled by multiple services, and a service can handle multiple event types, e.g.:
public class ScoringService : IHandles<ZoneDestroyedEventArgs>, IHandles<ZoneCreatedEventArgs>
{
    public void Handle(ZoneDestroyedEventArgs args)
    {
        Console.WriteLine("Scoring Service Handled ZoneDestroyed Event");
    }

    public void Handle(ZoneCreatedEventArgs args)
    {
        Console.WriteLine("Scoring Service Handled ZoneCreated Event");
    }
}

public class RenderingService : IHandles<ZoneDestroyedEventArgs>, IHandles<ZoneCreatedEventArgs>
{
    public void Handle(ZoneDestroyedEventArgs args)
    {
        Console.WriteLine("Rendering Service Handled ZoneDestroyed Event");
    }

    public void Handle(ZoneCreatedEventArgs args)
    {
        Console.WriteLine("Rendering Service Handled ZoneCreated Event");
    }
}
Services need to do other things as well as handle events (but might not have other interfaces, as they are not required). Some services need to be singletons, and the handling of events should respect the singleton registration: a call to container.Resolve for an IHandles<> interface should return the singleton instance of that service rather than create new ones. These services gather events from multiple sources and need to maintain internal state before sending them off elsewhere, so event handlers resolved for different event types on the same service must be routed to the same underlying instance.
I would like to be able to add IHandles interfaces to any service and have them picked up automatically, without having to fiddle with IoC mappings every time. Ideally, service types should be added using convention-based mapping as well.
So far I've been working on this for two days. I gave up trying to achieve it with StructureMap. Now I'm trying DryIoc, but I'm finding it even more difficult to understand and get right.
It is pretty easy to do in DryIoc (I am an owner). Here I will speak about the v2 RC version.
Given that you've replaced the IContainer dependency with IResolver, which is injected automatically:
var container = new Container();
container.Register<IEventDispatcher, DryIocEventDispatcher>();
container.RegisterMany<ScoringService>(Reuse.Singleton);
container.RegisterMany<RenderingService>();
var eventDispatcher = container.Resolve<IEventDispatcher>();
eventDispatcher.Dispatch(new ZoneDestroyedEventArgs());
eventDispatcher.Dispatch(new ZoneCreatedEventArgs());
RegisterMany will take care of the handlers being reused as a singleton, returning the same instance for both IHandles<> interfaces.
Additionally, you may use RegisterMapping to add/map an IHandles<> service to an already registered implementation.
DryIoc has even more to help with an EventAggregator implementation.
Also, here is a solution to a problem similar to yours.
And here is a gist with your working example.

PersistenceContextType.EXTENDED inside Singleton

We are using JBoss 7.1.1 in an application mostly generated by JBoss Forge; however, we added a repository layer for all domain-related code.
I was trying to create a startup bean to initialize the database state, and I would like to use my existing repositories for that.
My repositories all have an extended PersistenceContext injected into them. I use them from my view beans, which are @ConversationScoped @Stateful beans; by using the extended context my entities remain managed during a conversation.
First I tried this:
@Startup
@Singleton
public class ConfigBean {

    @Inject
    private StatusRepository statusRepository;

    @Inject
    private ZipCodeRepository zipCodeRepository;

    @PostConstruct
    public void createData() {
        statusRepository.add(new Status("NEW"));
        zipCodeRepository.add(new ZipCode("82738"));
    }
}
Example repository:
@Stateful
public class ZipCodeRepository {

    @PersistenceContext(type = PersistenceContextType.EXTENDED)
    private EntityManager em;

    public void add(ZipCode zipCode) {
        em.persist(zipCode);
    }

    // ...
}
This ends up throwing a javax.ejb.EJBTransactionRolledbackException on application startup with the following message:
JBAS011437: Found extended persistence context in SFSB invocation call stack but that cannot be used because the transaction already has a transactional context associated with it. This can be avoided by changing application code, either eliminate the extended persistence context or the transactional context. See JPA spec 2.0 section 7.6.3.1.
I struggled to find a good explanation for this; I had actually figured that since EJBs and their injection are handled by proxies, all the PersistenceContext injection and propagation would be handled automatically. I guess I was wrong.
However, while on this trail of thought I tried the following:
@Startup
@Singleton
public class ConfigBean {

    @Inject
    private SetupBean setupBean;

    @PostConstruct
    public void createData() {
        setupBean.createData();
    }

    @Stateful
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public static class SetupBean {

        @Inject
        private StatusRepository statusRepository;

        @Inject
        private ZipCodeRepository zipCodeRepository;

        public void createData() {
            statusRepository.add(new Status("NEW"));
            zipCodeRepository.add(new ZipCode("82738"));
        }
    }
}
This does the trick. All I did was wrap the code in a stateful session bean that is a static inner class of my singleton bean.
Does anyone understand this behavior? Everything works now, but I'm still a bit puzzled as to why it works this way.
A container-managed extended persistence context can only be initiated within the scope of a stateful session bean. It exists from the point at which the stateful session bean that declares a dependency on an entity manager of type PersistenceContextType.EXTENDED is created, and is said to be bound to the stateful session bean.
From the posted code, it seems ZipCodeRepository isn't itself a stateful bean, but you're calling it from one such bean.
In this case, you are initiating a PersistenceContextType.TRANSACTION context from ConfigBean, which propagates to ZipCodeRepository with its PersistenceContextType.EXTENDED context; the extended context tries to join the transaction, hence the exception.
Invocation of an entity manager defined with PersistenceContextType.EXTENDED will result in the use of the existing extended persistence context bound to that component.
When a business method of the stateful session bean is invoked, if the stateful session bean uses container-managed transaction demarcation, and the entity manager is not already associated with the current JTA transaction, the container associates the entity manager with the current JTA transaction and calls EntityManager.joinTransaction. If there is a different persistence context already associated with the JTA transaction, the container throws the EJBException.
In the latter case, you're creating a new transaction in SetupBean for each invocation with TransactionAttributeType.REQUIRES_NEW, and its persistence context is of the extended type, as it's a stateful bean.
Therefore, adding SetupBean as a stateful session bean that initiates a new transaction for each invocation, and only then calling ZipCodeRepository, doesn't result in the exception: ZipCodeRepository joins the same transaction initiated by SetupBean.