How are aggregates instantiated to test other aggregates with?

Suppose I have an aggregate that, for some operation, requires the existence of another aggregate. Let's assume I have a car and a garage. There might be a command called ParkInGarage that looks like this:
public class ParkInGarage {
    @TargetAggregateIdentifier
    public final UUID carId;
    public final Garage garage;
    // ... constructor omitted
}
I've read that to validate the existence of an aggregate, it is good practice to use the loaded aggregate in commands since that already implies its existence (as opposed to passing a garageId).
Now when unit-testing the Car using Axon's fixtures, I cannot simply instantiate my Garage by saying new Garage(buildGarageCmd). It will say:
java.lang.IllegalStateException: Cannot request current Scope if none is active
because no infrastructure was set up.
How would I test such a case, or should I design the aggregate differently?
Abstracted, real-world example
The aggregate root I am working with may have a reference to itself to form a tree-structure of said aggregate root. Let's call it Node.
@Aggregate
public class Node {
    private Node parentNode;
}
Upon creation, I can pass an Optional<Node> as the parent, or set the parent at a later time using a separate command. Whether the parent should be defined as an instance or by ID is part of the question.
public class AttachNodeCmd {
    @TargetAggregateIdentifier
    public final UUID nodeId;
    public final Optional<Node> parentNode;
}
In the command handler, I need to check if attaching the node to given parent would introduce a cycle (the structure is supposed to be a tree, not a common graph).
@CommandHandler
public Node(AttachNodeCmd command) {
    if (command.parentNode.isPresent()) {
        Node currentNode = command.parentNode.get();
        while (currentNode != null) {
            if (currentNode.equals(this)) throw new RecursionException();
            currentNode = currentNode.parentNode;
        }
    }
    // Accept the command by apply()ing an Event
}
At some point, the parent needs to be instantiated to perform those checks. This could either be done by supplying the aggregate instance in the command (discouraged), or by supplying a Repository<Node> and the nodeId to the command handler, which is the aggregate itself and is also discouraged. Currently I don't see a right way to do this, nor, further down the road, a way to test it.

I wouldn't put AR instances in commands. Command schemas should be stable and easy to serialize/reserialize as they are message contracts.
What you could do instead is resolve the dependency in the command handler.
//ParkInGarage command handler
Garage garage = garageRepository.garageOfId(command.garageId);
Car car = carRepository.carOfId(command.carId);
car.parkIn(garage);
I don't know Axon Framework at all, but that should be relatively easy to test now.
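For instance, a plain unit test could look something like this; a minimal sketch with JUnit 4 and Mockito, where the repository interfaces, handler class, revised command (carrying a garageId instead of a Garage) and the parkedInGarageId() accessor are hypothetical stand-ins for your own types:
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.UUID;
import org.junit.Test;

public class ParkInGarageHandlerTest {

    @Test
    public void carIsParkedInResolvedGarage() {
        UUID carId = UUID.randomUUID();
        UUID garageId = UUID.randomUUID();
        Garage garage = new Garage(garageId);
        Car car = new Car(carId);

        // stub the repositories so the handler resolves our test aggregates
        GarageRepository garageRepository = mock(GarageRepository.class);
        CarRepository carRepository = mock(CarRepository.class);
        when(garageRepository.garageOfId(garageId)).thenReturn(garage);
        when(carRepository.carOfId(carId)).thenReturn(car);

        // the handler resolves both aggregates by ID and delegates to the domain
        new ParkInGarageHandler(garageRepository, carRepository)
                .handle(new ParkInGarage(carId, garageId));

        assertEquals(garageId, car.parkedInGarageId()); // hypothetical accessor
    }
}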

I think @plalx is putting you on the right track. Commands are part of your API/Message Contract and exposing the Aggregate in there isn't that great an idea.
Additionally I'd like to note that the AggregateFixtures in Axon are there to test a single Aggregate, not the coordination of operations between Aggregates.
Coordination between aggregates/bounded contexts is typically where you see sagas come into play. Now to be honest, I am a bit in doubt whether this use case justifies a saga, but I could imagine that if the ParkCarInGarageCommand fails because the Garage aggregate is full (for example), you need to instruct the Car aggregate through another command telling it it's a no-go. The saga setup in Axon might help you with this, as you can easily either (1) catch the exception from handling the command or (2) handle the event notifying that the operation wasn't successful. A rough sketch of such a saga follows.
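To sketch what that could look like (assuming Axon 4's annotation-based saga API; every event and command type below is a hypothetical stand-in for your own messages):
import javax.inject.Inject;
import org.axonframework.commandhandling.gateway.CommandGateway;
import org.axonframework.modelling.saga.EndSaga;
import org.axonframework.modelling.saga.SagaEventHandler;
import org.axonframework.modelling.saga.StartSaga;
import org.axonframework.spring.stereotype.Saga;

@Saga
public class ParkCarSaga {

    @Inject
    private transient CommandGateway commandGateway;

    @StartSaga
    @SagaEventHandler(associationProperty = "carId")
    public void on(CarParkRequestedEvent event) {
        // ask the Garage aggregate to reserve a spot for the car
        commandGateway.send(new ReserveSpotCmd(event.getGarageId(), event.getCarId()));
    }

    @EndSaga
    @SagaEventHandler(associationProperty = "carId")
    public void on(SpotReservationRejectedEvent event) {
        // the garage was full: tell the Car aggregate it's a no-go
        commandGateway.send(new RejectParkingCmd(event.getCarId()));
    }
}
This way each aggregate only ever handles its own commands, and the saga carries the cross-aggregate decision.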

Related

How to chain planners from JDBC Adapter SchemaFactory?

I extended the JDBC adapter and used a model.json configuration with a custom schema factory, with 1 original schema and 2 derived schemas, to add rules, and that worked: the rules got executed on the original schema during planning, but their end result didn't get chosen as the best option by the Volcano planner because it's too expensive. The rules transformed the RelNode to execute on the 2 derived schemas. More details below and in the code.
1) Can I tell the Volcano planner to ignore 1 of the 3 schemas that I passed through my custom JDBC SchemaFactory?
I want the parser to work on that 1 original schema, but the planner should never suggest an optimal (cheapest) plan in that schema (only in the other 2 derived schemas). The 1 original schema is always mapped 1-to-1 to the other 2 derived schemas, so the RelNode that my rule returns is always semantically equivalent, just more expensive (for security reasons).
2) If that can't work, how can I call the HepPlanner instead of the default Volcano planner from the SchemaFactory that is set in model.json, since that's my starting point?
You can find my entire code on GitHub; I made it publicly available so that everyone can have a better starting point with Calcite than I did.
Here is the link: https://github.com/igrgurina/multicloud_rewriter
The Calcite library is amazing, but it's really hard to get into because it lacks examples and tutorials for common tasks.
Ideally, I would have the HepPlanner execute my rules, which transform expressions into semantically equivalent ones that use the 2 derived schemas instead of the 1 original schema (I have a rule that does that), and then have the Volcano planner optimize the result using only the 2 derived schemas, without having any idea that the 1 original schema exists, for security reasons.
I haven't found any reasonable examples that demonstrate how to do that, so any help would be appreciated (please don't post links to the Druid example or the Apache Calcite docs website; I've been through them a thousand times).
I've managed to make this work by using Hook.PROGRAM and prepending a custom program that executes my rules before all others.
Since Hook is marked as intended for testing and debugging only in the Calcite library, I would say this is not how it's supposed to be done, but I have nothing better at the moment.
Here is a short summary with a code sample:
public static class MultiCloudHookManager {

    private static final Program PROGRAM = new MultiCloudProgram();

    private static Hook.Closeable globalProgramClosable;

    public static void addHook() {
        if (globalProgramClosable == null) {
            globalProgramClosable = Hook.PROGRAM.add(program());
        }
    }

    private static Consumer<Holder<Program>> program() {
        return prepend(PROGRAM);
    }

    // this doesn't have to be in a separate method
    private static Consumer<Holder<Program>> prepend(Program program) {
        return (holder) -> {
            if (holder == null) {
                throw new IllegalStateException("No program holder");
            }
            Program chain = holder.get();
            if (chain == null) {
                chain = Programs.standard();
            }
            holder.set(Programs.sequence(program, chain));
        };
    }
}
The MultiCloudHookManager is then used in the SchemaFactory, where you simply call the MultiCloudHookManager.addHook() method. In this case, MultiCloudHookManager.PROGRAM is set to a MultiCloudProgram, which simply executes a set of rules in a HepPlanner.
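For reference, one possible shape of such a program; a sketch assuming Calcite's Programs.hep(...) helper, which wraps a rule set in a HepPlanner-backed Program (MULTI_CLOUD_RULES stands in for the actual rewrite rules):
import java.util.List;
import org.apache.calcite.plan.RelOptLattice;
import org.apache.calcite.plan.RelOptMaterialization;
import org.apache.calcite.plan.RelOptPlanner;
import org.apache.calcite.plan.RelOptRule;
import org.apache.calcite.plan.RelTraitSet;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.metadata.DefaultRelMetadataProvider;
import org.apache.calcite.tools.Program;
import org.apache.calcite.tools.Programs;

public class MultiCloudProgram implements Program {

    // hypothetical: the multicloud rewrite rules go here
    private static final List<RelOptRule> MULTI_CLOUD_RULES = List.of();

    // run the rewrite rules in a HepPlanner before the regular chain
    private final Program delegate =
            Programs.hep(MULTI_CLOUD_RULES, true, DefaultRelMetadataProvider.INSTANCE);

    @Override
    public RelNode run(RelOptPlanner planner, RelNode rel,
            RelTraitSet requiredOutputTraits,
            List<RelOptMaterialization> materializations,
            List<RelOptLattice> lattices) {
        return delegate.run(planner, rel, requiredOutputTraits,
                materializations, lattices);
    }
}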
For full details, refer to the source code in the GitHub repository.
This hack solution is inspired by another library.

Should instance field access be synchronized in a Tapestry page or component?

If a page or component class has an instance field holding a non-synchronized object, e.g. an ArrayList, and the application has code that structurally modifies this field, should access to this field be synchronized?
For example:
public class MyPageOrComponent
{
    @Persist
    private List<String> myList;

    void setupRender()
    {
        if (this.myList == null)
        {
            this.myList = new ArrayList<>();
        }
    }

    void afterRender(MarkupWriter writer)
    {
        // Should this be synchronized?
        if (someCondition)
        {
            this.myList.add(something);
        }
        else
        {
            this.myList.remove(something);
        }
    }
}
I'm asking because I seem to understand that Tapestry creates only one instance of a page or component class and it uses this instance for all the connected clients (but please correct me if this is not true).
In short the answer is no, you don't have to, because Tapestry does this for you. Tapestry transforms your pages and components at runtime in such a way that wherever you interact with your fields, you are not actually working on the instance variable but on a managed variable that is thread safe. The full inner workings are beyond me, but a brief reference to the transformation can be found here.
One warning: don't instantiate your page/component variables at declaration. I have seen some strange behaviour around this. So don't do this:
private List<String> myList = new ArrayList<String>();
Tapestry uses some runtime bytecode magic to transform your pages and components. Pages and components are singletons, but the properties are transformed so that they are backed by a PerThreadValue. This means that each request gets its own copy of the value, so no synchronization is required.
As suggested by @joostschouten, you should never initialize a mutable property in the field declaration. The strange behaviour he discusses is caused by the fact that the initialized value will be shared by all requests (since the initializer is only fired once, for the page/component singleton). Mutable fields should instead be initialized in a render method (e.g. @SetupRender).
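To make the mechanics a bit more tangible, here is a conceptual illustration only (not Tapestry's actual generated code; the framework does the equivalent for you through its class transformation) of how a field backed by a PerThreadValue behaves, using Tapestry's PerthreadManager:
import java.util.ArrayList;
import java.util.List;
import org.apache.tapestry5.ioc.annotations.Inject;
import org.apache.tapestry5.ioc.services.PerThreadValue;
import org.apache.tapestry5.ioc.services.PerthreadManager;

public class MyPageOrComponent
{
    @Inject
    private PerthreadManager perthreadManager;

    // the PerThreadValue itself is shared, but each thread sees its own contents
    private PerThreadValue<List<String>> myList;

    void pageLoaded() // lifecycle method: runs once, when the page instance is assembled
    {
        myList = perthreadManager.createValue();
    }

    void setupRender()
    {
        if (!myList.exists())
        {
            myList.set(new ArrayList<>());
        }
        myList.get().add("per-request data"); // no synchronization needed
    }
}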

Am I violating an OOP design guideline here? Couple of interesting design pickles

I'm designing a new power-up system for a game I'm creating. It's a side scroller, the power ups appear as circular objects and the player has to touch / move through them to pick up their power. The power up then becomes activated, and deactivates itself a few seconds later. Each power-up has its own duration defined. For simplicity's sake the power ups are spawned (placed on the screen) every X seconds.
I created a PowerUpManager, a singleton whose job is to decide when to create new power ups and then where to place them.
I then created the Powerup base class, and a class that inherits from that base class for every new Powerup. Every Power-up can be in one of three states: Disabled, placed on the screen, and picked up by the player. If the player did not pick up the power up but moved on, the power up will exit the screen and should go back from the placed state to the disabled state, so it can be placed again.
One of the requirements (that I) put in place is that there should be minimal code changes when I code up a new power-up class. The best I could do was one piece of code: the PowerUpManager's constructor, where you must add the new power-up to the container that holds all power-ups:
PowerupManager::PowerupManager()
{
    available = {
        new PowerupSpeed(),
        new PowerupAltWeapon(),
        ...
    };
}
The PowerUpManager, in more detail (the question is coming up!):
Holds a vector of pointers to PowerUp (The base class) called available. This is the initial container that holds one copy of each power up in the game.
To handle the different states, it has a couple of lists: One that holds pointers to currently placed power ups, and another list that holds pointers to currently active power ups.
It also has a method that gets called every game tick that decides if and where to place a new power up and clean up power ups that weren't picked up. Finally it has a method that gets called when the player runs into a power up, that activates the power up (Moves it from the placed to the active list, and calls the power up's activate method).
Finally, once you understand the full picture, the question:
I needed a way for client code to ask if a particular power-up is currently active. For example: the player has a weapon, but there is a power-up that replaces that weapon temporarily. Where I poll for input and recognize that the player wants to fire his weapon, I need to call the correct fire method: the alternative weapon power-up's fire method, not the regular weapon's fire method.
I thought of this particular demand for a while and came up with this:
template <typename T>
T* isActivated() // Returns a pointer to the derived Powerup if it exists in the activated list, or nullptr if it doesn't
{
    for (Powerup *i : active) // active is the list of currently active power-ups
    {
        T *result = dynamic_cast<T*>(i);
        if (result)
            return result;
    }
    return nullptr;
}
So client code looks like this:
PowerUpAltWeapon *weapon = powerUpManager->isActivated<PowerUpAltWeapon>();
if(weapon)
...
I thought the solution was elegant and kind of neat, but essentially it is trying to convert a base type to a derived type, and if that doesn't work, trying the next derived type: a long chain of if / else if, just disguised in a loop. Does this violate the guideline I just described, of not casting a base type to all of its derived types in a long chain of if / else if until you get a hit? Is there another solution?
A secondary question is: Is there a way to get rid of the need to construct all the different power ups in the PowerupManager constructor? That is currently the only place you need to make a change if you want to introduce a new power up. If I can get rid of that, that'd be interesting...
This is based on your design, but if it were me I would choose an ID for each PowerUp and keep a set of IDs in the client; each time the player picks up a PowerUp, that ID is added to the set and ... you know the rest. Using this technique I can do a fast lookup for every PowerUp and avoid dynamic_cast:
std::set<PowerUp::ID> my_powerUps;

template <class T>
bool isActivated()
{
    return my_powerUps.find(T::id()) != my_powerUps.end();
}
As for your second question: I have a similar program that loads plugins instead of PowerUps. I have a pure virtual base class that contains all the methods required by a plugin, implement it in shared modules, and then at startup I load those modules from a specific folder. For example, each shared module contains a create_object function that returns a plugin* (in your case a PowerUp*, of course); I iterate over the folder, load the modules, call create_object to create my plugins, and register them in my plugin_manager.

Loading classes with Jodd and using them in Drools

I am working on a system that uses Drools to evaluate certain objects. However, these objects can be of classes that are loaded at runtime using Jodd. I am able to load a class file fine using the following function:
public static void loadClassFile(File file) {
    try {
        // use Jodd ClassLoaderUtil to load the class into the current ClassLoader
        ClassLoaderUtil.defineClass(getBytesFromFile(file));
    } catch (IOException e) {
        exceptionLog(LOG_ERROR, getInstance(), e);
    }
}
Now let's say I have created a class called Tire and loaded it using the function above. Is there a way I can use the Tire class in my rule file:
rule "Tire Operational"
when
$t: Tire(pressure == 30)
then
end
Right now if I try to add this rule I get an error saying unable to resolve ObjectType Tire. My assumption would be that I somehow need to import Tire in the rule, but I'm not really sure how to do that.
I haven't used Drools since version 3, but I'll try to help anyway. When you load a class this way (dynamically, at runtime, no matter whether you use e.g. Class.forName() or Jodd), the loaded class name is simply not available to be used explicitly in the code. I believe we can simplify your problem to the following pseudo-code, where you first load a class and then try to use its name:
defineClass('Tire.class');
Tire tire = new Tire();
This obviously doesn't work, since the Tire type is not available at compile time: the compiler does not know what type you are going to load during execution.
What would work is to have Tire implement some interface (e.g. VehiclePart). Then you could use the following pseudo-code:
Class tireClass = defineClass('Tire.class');
VehiclePart tire = (VehiclePart) tireClass.newInstance();
System.out.println(tire.getPartName()); // prints 'tire' for example
Then maybe you can build your Drools rules over the VehiclePart interface and its getPartName() property.
Addendum
The above makes sense only when the interface covers all the properties of the dynamically loaded class. In most cases this is not a valid solution: dynamically loaded classes simply do not share their properties. So here is another approach.
Instead of using explicit class loading, the problem can be solved by 'extending' the classloader's class path. Be warned, this is a hack!
In Jodd there is a method, ClassLoaderUtil.addFileToClassPath(), that can add a file or a path to the classloader at runtime. So here are the steps that worked for me:
1) Put all dynamically created classes into some root folder, respecting their packages. For example, let's say we want to use a jodd.samples.TestBean class that has two properties: number (int) and value (String). We then need to put this class into the root/jodd/samples folder.
2) After building all the dynamic classes, extend the classloader's path:
ClassLoaderUtil.addFileToClassPath("root", ClassLoader.getSystemClassLoader());
3) Load the class and instantiate it before creating the KnowledgeBuilder:
Class testBeanClass = Class.forName("jodd.samples.TestBean");
Object testBean = testBeanClass.newInstance();
4) At this point you can use BeanUtil (from Jodd, for example) to manipulate the properties of the testBean instance.
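For example, with the testBean instance from step 3 (this uses the static-method style of older Jodd versions; recent Jodd exposes the same calls as BeanUtil.pojo.setProperty(...)):
BeanUtil.setProperty(testBean, "number", 173);       // set the int property reflectively
BeanUtil.setProperty(testBean, "value", "whatever"); // set the String property
System.out.println(BeanUtil.getProperty(testBean, "number")); // prints 173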
5) Create the Drools objects and insert testBean into the session:
knowledgeSession.insert(testBean);
6) Use it in the rule file:
import jodd.samples.TestBean;

rule "xxx"
when
    $t: TestBean(number == 173)
then
    System.out.println("!!!");
end
This worked for me. Note that in step 2 you can try using a different classloader, but you might need to pass it to the KnowledgeBuilderFactory via a KnowledgeBuilderConfiguration (i.e. PackageBuilderConfiguration).
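The hand-off could look roughly like this; a sketch against the Drools 5-era knowledge API, so names may differ in your version:
KnowledgeBuilderConfiguration configuration =
        KnowledgeBuilderFactory.newKnowledgeBuilderConfiguration(
                null, ClassLoader.getSystemClassLoader()); // or your custom classloader
KnowledgeBuilder builder =
        KnowledgeBuilderFactory.newKnowledgeBuilder(configuration);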
Another solution
Another solution is to simply copy all the object's properties to a map and deal with the map in the rule files. So you can use something like this at step 4:
Map map = new HashMap();
BeanTool.copy(testBean, map);
and later (step 5) add the map to the Drools context instead of the bean instance. In this case it would be even better to use the defineClass() method to explicitly define each class.

Dependency Injection and Runtime Object Creation

I've been trying to follow the principles of Dependency Injection, but after reading this article, I know I'm doing something wrong.
Here's my situation: My application receives different types of physical mail. All the incoming mail passes through my MailFunnel object.
While it's running, MailFunnel receives different types of messages from the outside: Box, Postcard and Magazine.
Each mail type needs to be handled differently. For example, if a Box comes in, I may need to record the weight before delivering it. Consequently, I have BoxHandler, PostcardHandler and MagazineHandler objects.
Each time a new message comes into my MailFunnel, I instantiate a new corresponding MailHandler object.
For example:
class MailFunnel
{
public:
    void NewMailArrived( Mail mail )
    {
        switch (mail.type)
        {
        case BOX: {
            BoxHandler * bob = new BoxHandler(shreddingPolicy, maxWeightPolicy);
            bob->get_to_work();
            break;
        }
        case POSTCARD: {
            PostcardHandler * frank = new PostcardHandler(coolPicturePolicy);
            frank->get_to_work();
            break;
        }
        case MAGAZINE: {
            MagazineHandler * nancy = new MagazineHandler(censorPolicy);
            nancy->get_to_work();
            break;
        }
        }
    }

private:
    MaxWeightPolicy & maxWeightPolicy;
    ShreddingPolicy & shreddingPolicy;
    CoolPicturePolicy & coolPicturePolicy;
    CensorPolicy & censorPolicy;
};
On one hand, this is great because it means that if I get five different pieces of mail in, I immediately have five different MailHandlers working concurrently to take care of business. However, this also means that I'm mixing object creation with application logic - a big no-no when it comes to Dependency Injection.
Also, I have all these policy references hanging around in my MailFunnel object that MailFunnel really doesn't need. The only reason that MailFunnel has these objects is to pass them to the MailHandler constructors. Again, this is another thing I want to avoid.
All recommendations welcome. Thanks!
This looks more like a factory to me. Move the invocation of the get_to_work() method out of the creation code and return the handler instead. The pattern works pretty well for a factory.
class MailHandlerFactory
{
public:
    IMailHandler* GetHandler( Mail mail )
    {
        switch (mail.type)
        {
        case BOX:
            return new BoxHandler(shreddingPolicy, maxWeightPolicy);
        case POSTCARD:
            return new PostcardHandler(coolPicturePolicy);
        case MAGAZINE:
            return new MagazineHandler(censorPolicy);
        }
        return nullptr; // unknown mail type
    }

private:
    MaxWeightPolicy & maxWeightPolicy;
    ShreddingPolicy & shreddingPolicy;
    CoolPicturePolicy & coolPicturePolicy;
    CensorPolicy & censorPolicy;
};
class MailFunnel
{
public:
    MailFunnel( MailHandlerFactory* factory )
        : handlerFactory(factory) {}

    void NewMailArrived( Mail mail ) {
        IMailHandler* handler = handlerFactory->GetHandler(mail);
        handler->get_to_work();
    }

private:
    MailHandlerFactory* handlerFactory;
};
When you see that switch statement, think polymorphism. This design can only be extended by modification. I'd redo it in such a way that I could add new behavior by adding classes; that's what the open/closed principle is all about.
Why can't you just have three overloaded methods that take the different types of mail and do the appropriate thing? Or have each type handle itself?
In fact, if you have something like a type field, chances are you should in fact have different types.
Basically, do the following:
1) Make the Mail class abstract.
2) Create three subclasses of Mail: Box, PostCard, and Magazine.
3) Give each subclass a method to handle mail, or centralize it in a separate HandlerFactory.
4) When mail is passed to the mail funnel, simply have it call the handle-mail method, or have the HandlerFactory take the mail and return the appropriate handler. Again, rather than having awkward switch statements everywhere, use the language; this is what types and method overloading are for.
If your mail handling becomes complex and you want to take it out, you can eventually make a mail handler class and extract those policies into it.
You could also consider using a template method, because the only real difference between each of those branches seems to be the handler you instantiate; maybe you could simplify it so that the mail type determines the handler and the rest of the code stays basically the same.
Interesting that you're applying dependency injection to a C++ project; it has been done elsewhere: a quick Google search finds the Google Code project Autumn Framework.
But the answer by tvanfosson is what I would suggest trying first, before adopting a new framework.