Simply put, I have the following EMF model:
Step
  successor
  predecessor
ForkStep extends Step
  alternateSuccessor
Now there is a bidirectional reference between successor and predecessor; no problem so far.
The tricky part is: how do I create a bidirectional reference from alternateSuccessor to the predecessor defined in the superclass?
Normally each step can only have one predecessor and one successor, but a fork may have two successors (successor and alternateSuccessor).
If I now go ahead and create a bidirectional reference, EMF generates a new attribute in the superclass, which does not seem right.
Your question can be understood in two ways:
1. Given a model, how can we detect which step is a "default" step and which is a "fork"?
2. How can we enforce that users are only able to create "default" steps (single successor) or "fork" steps (multiple successors)?
Case #1:
You need a single class Step where you introduce a property isForkStep.
The value of this property will be derived using OCL as follows: successor->size() > 1.
First, I created the meta-model (in a file called My.ecore) according to your example; I had to add a Scenario class acting as a container for steps. Then I opened the My.ecore file using the OCLinEcore editor and added the OCL code to the isForkStep property. The resulting meta-model looks as follows:
package test : test = 'test'
{
    class Scenario {
        property steps : Step[*] { ordered composes };
    }
    class Step {
        attribute stepId : String { id };
        attribute isForkStep : Boolean {
            derivation: successor->size() > 1;
        }
        property predecessor#successor : Step[?];
        property successor#predecessor : Step[*] { ordered };
    }
}
Then I tested the new meta-model by creating a "dynamic instance" of Scenario with 4 steps. In the "Properties" view, Step 1 has the property "Is Fork Step" set to true (clicking the other steps would show false).
Case #2:
Here, I modeled the situation differently: I created an interface Step and two implementing classes, DefaultStep and ForkStep.
Then I added a different OCL constraint to each class:
DefaultStep: successor->size() <= 1
ForkStep: successor->size() >= 2
Note: for ForkStep I might also use successor->size() >= 2 or successor->size() = 0 to allow ForkSteps without successors.
The meta-model again has a Scenario container, with DefaultStep and ForkStep both extending Step.
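A minimal OCLinEcore sketch of this variant (the original diagram is not reproduced here; the invariant names are mine, and Step is marked abstract to play the role of the interface):

package test : test = 'test'
{
    class Scenario {
        property steps : Step[*] { ordered composes };
    }
    abstract class Step {
        attribute stepId : String { id };
        property predecessor#successor : Step[?];
        property successor#predecessor : Step[*] { ordered };
    }
    class DefaultStep extends Step {
        invariant AtMostOneSuccessor: successor->size() <= 1;
    }
    class ForkStep extends Step {
        invariant AtLeastTwoSuccessors: successor->size() >= 2;
    }
}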
There are many other ways to implement this. For example, you can use Java instead of OCL, or you can model the situation differently.
By the way, a nice slideshow about OCL in EMF can be found here:
http://www.slideshare.net/EdWillink/enriching-withocl
I extended the JDBC adapter and used a model.json configuration with a custom schema factory exposing 1 original schema and 2 derived schemas, and added rules to it. That worked: the rules got executed on the original schema during planning, transforming the RelNode to execute on the 2 derived schemas, but their end result didn't get chosen as the best option by the Volcano planner because it's too expensive. More details below and in the code.
1) Can I tell the Volcano planner to ignore 1 of the 3 schemas that I passed through the custom JDBC SchemaFactory?
I want the parser to work on that 1 original schema, but the planner should never suggest an optimal (cheapest) plan in that schema (only in the other 2 derived schemas). The original schema always maps 1-to-1 onto the other 2 derived schemas, so the RelNode that my rule returns is always semantically equivalent, just more expensive (for security reasons).
2) If that can't work, how can I call HepPlanner instead of the default Volcano planner from the SchemaFactory that is set in model.json, since that's my starting point?
You can find my entire code on GitHub; I made it publicly available so that everyone can have a better starting point with Calcite than I did.
Here is the link: https://github.com/igrgurina/multicloud_rewriter
The Calcite library is amazing, but it's really hard to get into because it lacks examples and tutorials for common tasks.
Ideally, I would have HepPlanner execute my rules that transform expressions into semantically equivalent ones using the 2 derived schemas instead of the 1 original schema (I have a rule that does that), and then have the Volcano planner optimize the result using only the 2 derived schemas, without knowing that the original schema exists, for security reasons.
I haven't found any reasonable examples that demonstrate how to do that, so any help would be appreciated (please don't post links to the Druid example or the Apache Calcite docs website; I've been through them a thousand times).
I've managed to make this work by using Hook.PROGRAM and prepending a custom program that executes my rules before all others.
Since Hook is marked as for testing and debugging only in the Calcite library, I'd say this is not how it's supposed to be done, but I have nothing better at the moment.
Here is a short summary with a code sample:
import java.util.function.Consumer;

import org.apache.calcite.runtime.Hook;
import org.apache.calcite.tools.Program;
import org.apache.calcite.tools.Programs;
import org.apache.calcite.util.Holder;

public static class MultiCloudHookManager {
    private static final Program PROGRAM = new MultiCloudProgram();

    private static Hook.Closeable globalProgramClosable;

    public static void addHook() {
        if (globalProgramClosable == null) {
            globalProgramClosable = Hook.PROGRAM.add(program());
        }
    }

    private static Consumer<Holder<Program>> program() {
        return prepend(PROGRAM);
    }

    // this doesn't have to be a separate method
    private static Consumer<Holder<Program>> prepend(Program program) {
        return (holder) -> {
            if (holder == null) {
                throw new IllegalStateException("No program holder");
            }
            Program chain = holder.get();
            if (chain == null) {
                chain = Programs.standard();
            }
            // run the custom program first, then the default chain
            holder.set(Programs.sequence(program, chain));
        };
    }
}
The MultiCloudHookManager is then used in the SchemaFactory, where you simply call the MultiCloudHookManager.addHook() method. In this case, MultiCloudHookManager.PROGRAM is set to a MultiCloudProgram, which simply executes a set of rules in HepPlanner.
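For illustration, here is a minimal sketch of what such a program could look like; this is my assumption about its shape, not the repository's actual code (Programs.hep wraps a rule set in a HepPlanner):

import java.util.List;

import org.apache.calcite.plan.RelOptLattice;
import org.apache.calcite.plan.RelOptMaterialization;
import org.apache.calcite.plan.RelOptPlanner;
import org.apache.calcite.plan.RelTraitSet;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.metadata.DefaultRelMetadataProvider;
import org.apache.calcite.tools.Program;
import org.apache.calcite.tools.Programs;

import com.google.common.collect.ImmutableList;

public class MultiCloudProgram implements Program {
    // Programs.hep runs the given rules in a HepPlanner;
    // the empty list is a placeholder for the schema-rewriting rules.
    private final Program hep =
            Programs.hep(
                    ImmutableList.of(/* original-to-derived-schema rules */),
                    true, // noDag
                    DefaultRelMetadataProvider.INSTANCE);

    @Override
    public RelNode run(RelOptPlanner planner, RelNode rel,
            RelTraitSet requiredOutputTraits,
            List<RelOptMaterialization> materializations,
            List<RelOptLattice> lattices) {
        return hep.run(planner, rel, requiredOutputTraits,
                materializations, lattices);
    }
}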
For full details, refer to the source code in the GitHub repository.
This hacky solution is inspired by another library.
I have a game object (a cube, let's say) which exists in the scene, and I want it to have an injectable component. I want to be able to say, for example: my cube has an IShotFirer member, which can resolve to either a BoomShotFirer or a BangShotFirer MonoBehaviour component, both of which implement IShotFirer. When binding happens, I want this component to be added to the cube object.
public class CubeBehavior : MonoBehaviour
{
    [Inject]
    private IShotFirer shotFirer;
}
Is it possible to do this without 1) needing an existing prefab which contains one of these Bang/Boom components, or 2) needing an existing scene object which has one of these components attached?
In other words, I want to be able to dynamically add the component to my game object depending on the bindings, and not relying on anything other than the script files which define either BoomShotFirer or BangShotFirer. But the docs seem to imply that I need to find an existing game object or prefab (e.g. using .FromComponentsInChildren(), etc.)
Is it possible to do this without 1) needing an existing prefab which contains one of these Bang/Boom components, or 2) needing an existing scene object which has one of these components attached?
Yes, it is.
Zenject provides a host of helpers that create new components and bind them. Quoting the docs:
FromNewComponentOnRoot - Instantiate the given component on the root of the current context. This is most often used with GameObjectContext.
Container.BindInterfacesTo<BoomShotFirer>().FromNewComponentOnRoot();
FromNewComponentOn - Instantiate a new component of the given type on the given game object
Container.BindInterfacesTo<BoomShotFirer>().FromNewComponentOn(someGameObject);
FromNewComponentOnNewGameObject - Create a new game object at the root of the scene and add the Foo MonoBehaviour to it
Container.BindInterfacesTo<BoomShotFirer>().FromNewComponentOnNewGameObject();
For bindings like this one that create new game objects, there are also extra bind methods you can chain:
WithGameObjectName = The name to give the new Game Object associated with this binding.
UnderTransformGroup(string) = The name of the transform group to place the new game object under.
UnderTransform(Transform) = The actual transform to place the new game object under.
UnderTransform(Method) = A method to provide the transform to use.
That list is not even exhaustive; be sure to check the readme and the cheatsheet, from both of which I have extracted the info above.
Also note that, as usual, you can append .AsSingle(), .AsTransient() and .AsCached() to achieve the desired result.
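For example, a minimal installer sketch using the second helper (GameInstaller and the cube field are my hypothetical names, not something from the docs):

using UnityEngine;
using Zenject;

public class GameInstaller : MonoInstaller
{
    // Assign the cube in the inspector.
    public GameObject cube;

    public override void InstallBindings()
    {
        // Adds a BoomShotFirer component to the cube at bind time and
        // injects it wherever IShotFirer is requested (e.g. CubeBehavior).
        Container.Bind<IShotFirer>()
            .To<BoomShotFirer>()
            .FromNewComponentOn(cube)
            .AsSingle();
    }
}

Swapping BoomShotFirer for BangShotFirer here is the only change needed to resolve the cube's IShotFirer differently.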
In the first step of development, I design Car and AI as one entity.
It works nicely (pseudocode):
for (every entity that is "racing car") {
    // ^ know type by using a flag
    //   or by iterating a special component (e.g. "RacingCarComponent")
    Entity entity = ...;
    AI* ai = get<AI>(entity);
    ai->setInformation(...);
}
for (every entity that is "bicycle") {
    Entity entity = ...;
    AI* ai = get<AI>(entity);
    ai->setInformation(...); // the info is very different from "racing car"
}
Later, I want a new feature: switching the Driver in and out (which affects the AI).
I split the entity in two: the vehicle entity now carries an AttachAI component that points to the AI.
The above code then becomes:
for (every entity that is "racing car") {
    Entity entity = ...;
    AttachAI* aiAttach = get<AttachAI>(entity); // <-- edit
    aiAttach->ai->setInformation(...);          // <-- edit
}
for (every entity that is "bicycle") {
    Entity entity = ...;
    AttachAI* aiAttach = get<AttachAI>(entity); // <-- edit
    aiAttach->ai->setInformation(...);          // <-- edit
}
Problem
It works nicely both before and after the change, but it is hard to maintain.
If there are N types of vehicle in version 1, e.g. truck, motorcycle, plane, boat, rocket,
I will have to edit N*2 call sites, which have potentially already scattered across many .cpp files.
Main issue: if I forget to refactor any code, it will still compile fine.
The problem will appear only at run time.
In real life, I face this issue whenever a new design wants to divide an entity into several simpler entities.
The refactoring is always just adding one more level of indirection.
Question
Suppose that in version 1, I don't expect that I will ever want to switch the Driver in or out.
Is it possible to prevent the problem? How?
I may be mistaken, but it seems as though you may be looping through all of the entities multiple times, checking a condition. I am not exactly sure about C++ syntax, so please bear with me:
for (Entity& entity : entities) {
    const char* info = nullptr;
    // Check the type to get type-specific info
    if (isRacingCar(entity)) {
        info = "fast car";
    } else if (isBicycle(entity)) {
        info = "rad spokes";
    }
    // If we found info, we know we had a valid type
    if (info != nullptr) {
        AttachAI* aiAttach = get<AttachAI>(entity);
        aiAttach->ai->setInformation(info);
    }
}
I'm not sure if the get function requires anything specific for each type; in my example I assume we are only passing the entity and not something type-specific. If it does, an additional variable could be used.
Is it possible to automatically generate Sitecore templates just by coding models? I'm using Sitecore 8.0, and I saw the Glass Mapper code-first approach, but I can't find more information about it.
Not sure why there isn't much info about it, but you can definitely model/code first! I do it a lot, using the attribute configuration approach like so:
[SitecoreType(true, "{generated guid}")]
public class ExampleModel
{
    [SitecoreField("{generated guid}", SitecoreFieldType.SingleLineText)]
    public virtual string Title { get; set; }
}
Now, how this works: the SitecoreType 'true' value for the first parameter indicates that the class may be used for code first. There is a GlassCodeFirstDataProvider which has an Initialize method, executed in Sitecore's Initialize pipeline. This method collects all configurations marked for code first and creates them in the SQL data provider. The sections and fields are stored in memory. It also takes inheritance into account (base templates); see the sketch below.
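For instance, a hedged sketch of the inheritance case (the class names are mine; the GUID placeholders follow the example above):

[SitecoreType(true, "{generated guid}")]
public class BaseModel
{
    [SitecoreField("{generated guid}", SitecoreFieldType.SingleLineText)]
    public virtual string Title { get; set; }
}

// Inheriting from BaseModel should result in the generated template
// getting the base template assigned.
[SitecoreType(true, "{generated guid}")]
public class ArticleModel : BaseModel
{
    [SitecoreField("{generated guid}", SitecoreFieldType.RichText)]
    public virtual string Body { get; set; }
}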
I think you first need to uncomment some code in the GlassMapperScCustom class you get when you install the project via NuGet. The PostLoad method contains the few lines that execute the Initialize method of each CodeFirstDataProvider:
var dbs = global::Sitecore.Configuration.Factory.GetDatabases();
foreach (var db in dbs)
{
    var provider = db.GetDataProviders().FirstOrDefault(x => x is GlassDataProvider) as GlassDataProvider;
    if (provider != null)
    {
        using (new SecurityDisabler())
        {
            provider.Initialise(db);
        }
    }
}
Furthermore, I would advise using code first in development only. You can create packages or serialize the templates as usual and deploy them to other environments, so you don't need the data provider (and its potential risks) there.
You can, but it's not going to be Glass related.
Code first is exactly what Sitecore.PathFinder is looking to achieve. There's not a lot of info publicly available on this yet, however.
Get started here: https://github.com/JakobChristensen/Sitecore.Pathfinder
I am working on a system that uses Drools to evaluate certain objects. However, these objects can be of classes that are loaded at runtime using Jodd. I am able to load a class file fine using the following function:
public static void loadClassFile(File file) {
    try {
        // use Jodd ClassLoaderUtil to load the class into the current ClassLoader
        ClassLoaderUtil.defineClass(getBytesFromFile(file));
    } catch (IOException e) {
        exceptionLog(LOG_ERROR, getInstance(), e);
    }
}
Now let's say I have created a class called Tire and loaded it using the function above. Is there a way I can use the Tire class in my rule file?
rule "Tire Operational"
when
$t: Tire(pressure == 30)
then
end
Right now, if I try to add this rule, I get an error saying "unable to resolve ObjectType Tire". My assumption is that I would somehow need to import Tire in the rule, but I'm not really sure how to do that.
I haven't used Drools since version 3, but I will try to help anyway. When you load a class this way (dynamically, at run time, whether you use e.g. Class.forName() or Jodd), the loaded class name is simply not available to be used explicitly in code. I believe we can simplify your problem to the following pseudo-code, where you first load a class and then try to use its name:
defineClass("Tire.class");
Tire tire = new Tire();
This obviously doesn't work, since the Tire type is not available at compile time: the compiler does not know what type you are going to load during execution.
What would work is to have Tire implement some interface (e.g. VehiclePart). Then you could use the following pseudo-code:
Class tireClass = defineClass("Tire.class");
VehiclePart tire = (VehiclePart) tireClass.newInstance();
System.out.println(tire.getPartName()); // prints "tire", for example
Then maybe you can build your Drools rules over the VehiclePart interface and the getPartName() property.
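For instance, a hedged DRL sketch (the package in the import is hypothetical):

import com.example.VehiclePart; // hypothetical package

rule "Part is a tire"
when
    $p : VehiclePart(partName == "tire")
then
    System.out.println("found a tire");
end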
Addendum
The above makes sense only when the interface covers all the properties of the dynamically loaded class. In most cases this is not a valid solution: dynamically loaded classes simply do not share properties. So here is another approach.
Instead of using explicit class loading, this problem can be solved by 'extending' the classloader's class path. Be warned, this is a hack!
In Jodd there is a method, ClassLoaderUtil.addFileToClassPath(), that can add a file or a path to the classloader at run time. So here are the steps that worked for me:
1) Put all dynamically created classes into some root folder, respecting their packages. For example, let's say we want to use a jodd.samples.TestBean class that has two properties: number (an int) and value (a String). We then need to put this class into the root/jodd/samples folder.
2) After building all dynamic classes, extend the classloader's path:
ClassLoaderUtil.addFileToClassPath("root", ClassLoader.getSystemClassLoader());
3) Load the class and instantiate it before creating the KnowledgeBuilder:
Class testBeanClass = Class.forName("jodd.samples.TestBean");
Object testBean = testBeanClass.newInstance();
4) At this point you can use BeanUtil (from Jodd, for example) to manipulate properties of the testBean instance; a combined sketch of these steps appears below, after step 6.
5) Create the Drools objects and insert testBean into the session:
knowledgeSession.insert(testBean);
6) Use it in the rule file:
import jodd.samples.TestBean;

rule "xxx"
when
    $t : TestBean(number == 173)
then
    System.out.println("!!!");
end
This worked for me. Note that in step #2 you could try using a different classloader, but you might need to pass it to the KnowledgeBuilderFactory via a KnowledgeBuilderConfiguration (i.e. PackageBuilderConfiguration).
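Putting steps 2 through 5 together, a minimal sketch; it assumes the Drools 5.x KnowledgeBuilder API mentioned above, a rules.drl on the classpath containing the rule from step 6, and an older Jodd where BeanUtil.setProperty is a static method (newer Jodd uses BeanUtil.pojo.setProperty):

import jodd.bean.BeanUtil;
import jodd.util.ClassLoaderUtil;
import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;

public class DynamicBeanRulesDemo {
    public static void main(String[] args) throws Exception {
        // step 2: extend the system classloader's path with the root folder
        ClassLoaderUtil.addFileToClassPath("root", ClassLoader.getSystemClassLoader());

        // step 3: load and instantiate the dynamically created class
        Class testBeanClass = Class.forName("jodd.samples.TestBean");
        Object testBean = testBeanClass.newInstance();

        // step 4: set a property reflectively
        BeanUtil.setProperty(testBean, "number", Integer.valueOf(173));

        // step 5: build the knowledge base from the rule file and insert the bean
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newClassPathResource("rules.drl"), ResourceType.DRL);
        if (kbuilder.hasErrors()) {
            throw new IllegalStateException(kbuilder.getErrors().toString());
        }
        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
        StatefulKnowledgeSession session = kbase.newStatefulKnowledgeSession();
        session.insert(testBean);
        session.fireAllRules(); // fires "xxx" when number == 173
        session.dispose();
    }
}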
Another solution
Another solution is to simply copy all object properties into a map and deal with the map in the rule files. So you can use something like this at step #4:
Map map = new HashMap();
BeanTool.copy(testBean, map);
and later (step #5) insert the map into the Drools session instead of the bean instance. In this case it would be even better to use the defineClass() method to explicitly define each class.
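The rule can then match on map entries instead of bean properties; a hedged sketch (Drools pattern constraints support map access):

import java.util.Map;

rule "xxx over map"
when
    $m : Map(this["number"] == 173)
then
    System.out.println("!!!");
end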