I have created my metamodel, called WFG.ecore.
With ATL I managed to transform a bpmn2 file into a WFG model. The ATL transformation produces the WorkFlow object, which is the container of all the other objects in the WFG model.
Now I would like to modify the WorkFlow object programmatically in Java, but I cannot.
How can I delete an object instance from its container, and from all the places where it is referenced?
Below is an example with instances:
               gateways
      +--------->+----------+
      |          |Gateway_1 |
      ♦          +----------+
+-----------+         ^
|WorkFlow_1 |         | nextGateway 0..1
+-----------+         |
      ♦          +---------+
      |          | Node_1  |
      +--------->+---------+
               nodes
I would like to delete the instance Gateway_1, so that it is no longer contained in WorkFlow_1 and so that Node_1.getNextGateway() returns null. I tried
WorkFlow_1.getGateways().remove(Gateway_1), but it doesn't work.
The naive answer is to use EcoreUtil.delete() or the Delete command. Both of these remove the EObject from its container and remove (i.e. null out) any cross references. In general, though, you don't want to do it that way for the following reasons:
Child references. Though EcoreUtil.delete(Gateway_1) will remove Gateway_1 from its container and from the Node_1 reference, it won't remove any cross references to children of Gateway_1 EVEN THOUGH THEY ALSO WILL BE DELETED from their container. So you could end up with dangling references to nonexistent objects that were children of Gateway_1.
Performance. There's no reliable way to find cross references efficiently. That means that every EObject in your model will be checked to see if it has a cross reference to Gateway_1 so that cross reference can be deleted. That makes EcoreUtil.delete() an O(n) operation where n is the number of EObjects in your model.
The best solution is some combination of bidirectional references and reference maps. Either Gateway_1 should know who is cross referencing it, or that information should be accessible elsewhere. That way you can explicitly remove all references to Gateway_1 in an efficient and complete manner.
This answer closely follows this blog post, EMF Dos and Don'ts #11, by Maximilian Koegel and Jonas Helming.
By the way, EcoreUtil.remove() does NOT do the cross reference removal, it just removes the EObject from its container.
// assuming you have an EMF editing domain for the model
Command command = DeleteCommand.create(editingDomain, Collections.singleton(Gateway_1));
editingDomain.getCommandStack().execute(command);
And for Node_1:
Node_1.setNextGateway(null);
I am learning concurrency and want to clarify my understanding of the following code example from the Rust book. Please correct me if I am wrong.
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;
fn main() {
    let data = Arc::new(Mutex::new(vec![1, 2, 3]));
    for i in 0..3 {
        let data = data.clone();
        thread::spawn(move || {
            let mut data = data.lock().unwrap();
            data[0] += i;
        });
    }
    thread::sleep(Duration::from_millis(50));
}
What is happening on the line let data = data.clone()?
The Rust book says
we use clone() to create a new owned handle. This handle is then moved into the new thread.
What is the new "owned handle"? It sounds like a reference to the data?
Since clone takes a &self and returns a Self, is each thread modifying the original data instead of a copy? I guess that is why the code is not using data.copy() but data.clone() here.
The data on the right side is a reference, and the data on the left is an owned value. There is variable shadowing here.
[...] what is happening on let data = data.clone()?
Arc stands for Atomically Reference Counted. An Arc manages one object (of type T) and serves as a proxy to allow for shared ownership, meaning: one object is owned by multiple names. Wow, that sounds abstract, let's break it down!
Shared Ownership
Let's say you have an object of type Turtle 🐢 which you bought for your family. Now the problem arises that you can't assign a clear owner of the turtle: every family-member kind of owns that pet! This means (and sorry for being morbid here) that if one member of the family dies, the turtle won't die with that family-member. The turtle will only die if all members of the family are gone as well. Everyone owns and the last one cleans up.
So how would you express that kind of shared ownership in Rust? You will quickly notice that it's impossible to do with only standard methods: you'd always have to choose one owner and everyone else would only have a reference to the turtle. Not good!
So along come Rc and Arc (which, for the sake of this story, serve the exact same purpose). These allow for shared ownership by tinkering a bit with unsafe Rust. Let's look at the memory after executing the following code (note: the memory layout is simplified for learning and might not match the exact real-world layout):
let annas = Rc::new(Turtle { legs: 4 });
Memory:
Stack                    Heap
-----                    ----
annas:
+--------+               +------------+
| ptr: o-|-------------->| count: 1   |
+--------+               | data: 🐢   |
                         +------------+
We see that the turtle lives on the heap... next to a counter which is set to 1. This counter knows how many owners the object data currently has. And 1 is correct: annas is the only one owning the turtle right now. Let's clone() the Rc to get more owners:
let peters = annas.clone();
let bobs = annas.clone();
Now the memory looks like this:
Stack                    Heap
-----                    ----
annas:
+--------+               +------------+
| ptr: o-|-------------->| count: 3   |
+--------+         ^     | data: 🐢   |
                   |     +------------+
peters:            |
+--------+         |
| ptr: o-|---------+
+--------+         ^
                   |
bobs:              |
+--------+         |
| ptr: o-|---------+
+--------+
As you can see, the turtle still exists only once. But the reference count was increased and is now 3, which makes sense, because the turtle has three owners now. All those three owners reference this memory block on the heap. That's what the Rust book calls owned handle: each owner of such a handle also kind of owns the underlying object.
(also see "Why is std::rc::Rc<> not Copy?")
Atomicity and Mutability
What's the difference between Arc<T> and Rc<T> you ask? The Arc increments and decrements its counter in an atomic fashion. That means that multiple threads can increment and decrement the counter simultaneously without a problem. That's why you can send Arcs across thread-boundaries, but not Rcs.
Now you notice that you can't mutate the data through an Arc<T>! What if your 🐢 loses a leg? Arc is not designed to allow mutable access from multiple owners at (possibly) the same time. That's why you often see types like Arc<Mutex<T>>. The Mutex<T> is a type that offers interior mutability, which means that you can get a &mut T from a &Mutex<T>! This would normally conflict with the Rust core principles, but it's perfectly safe because the mutex also manages access: you have to request access to the object. If another thread/source currently has access to the object, you have to wait. Therefore, at any given moment in time, only one thread is able to access T.
Conclusion
[...] is each thread modifying the original data instead of a copy?
As you can hopefully understand from the explanation above: yes, each thread is modifying the original data. A clone() on an Arc<T> won't clone the T, but merely create another owned handle; which in turn is just a pointer that behaves as if it owns the underlying object.
I am not an expert on the standard library internals and I am still learning Rust, but here is what I can see (you could check the source yourself too if you wanted).
Firstly, an important thing to remember in Rust is that it is actually possible to step outside the "safe bounds" that the compiler provides, if you know what you're doing. So attempting to reason about how some of the standard library types work internally, using the ownership system as your base of understanding, may not make much sense.
Arc is one of the standard library types that sidesteps the ownership system internally. It essentially manages a pointer all by itself, and calling clone() returns a new Arc that points at the exact same piece of memory the original did, with an incremented reference count.
So on a high level, yes, clone() returns a new Arc instance, and ownership of that new instance is moved into the left-hand side of the assignment. However, internally the new Arc instance still points where the old one did, via a raw pointer (or, as it appears in the source, via a Shared instance, which is a wrapper around a raw pointer). The wrapper around the raw pointer is what I imagine the documentation refers to as an "owned handle".
std::sync::Arc is a smart pointer, one that the documentation describes as:
An atomically reference counted wrapper for shared state.
Arc (and its non-thread-safe friend std::rc::Rc) allow shared ownership. That means that multiple "handles" point to the same value. Whenever a handle is cloned, a reference counter is incremented. Whenever a handle is dropped, the counter is decremented. When the counter goes to zero, the value that the handles were pointing to is freed.
Note that this smart pointer does not call the underlying clone method of the data; in fact, there doesn't even need to be an underlying clone method! Arc handles what happens when clone is called.
What is the new "owned handle"? It sounds like a reference to the data?
It both is and isn't a reference. In the broader programming and English sense of the word "reference", it is a reference. In the specific sense of a Rust reference (&Foo), it is not a reference. Confusing, right?
The second part of your question is about std::sync::Mutex, which is described as:
A mutual exclusion primitive useful for protecting shared data
Mutexes are common tools in multithreaded programs, and are well-described
elsewhere so I won't bother repeating that here. The important thing to note is that a Rust Mutex only gives you the ability to modify shared state. It is up to the Arc to allow multiple owners to have access to the Mutex to even attempt to modify the state.
This is a bit more granular than other languages, but allows for these pieces to be reused in novel ways.
I'm working on a game (and my own custom engine). I have quite a few assets (textures, skeletal animations, etc.) that are used by multiple models and therefore get loaded multiple times.
At first, my ambitions were smaller, the game was simpler, and I could live with a little duplication, so a shared_ptr that took care of resource cleanup once the last instance was gone seemed like a good idea. As my game grew, more and more resources got loaded multiple times, and all the OpenGL state changing slowed performance down to a crawl. To solve this problem, I decided to write an asset manager class.
I'm using an unordered_map that maps the path to the file (a std::string) to a C++11 shared_ptr pointing to the actual loaded asset. If the file is already loaded, I return the pointer; if not, I call the appropriate Loader class. Plain and simple.
Unfortunately, I can't say the same about removal. One copy of the pointer remains in the unordered_map. Currently, I iterate through the entire map and perform .unique() checks every frame. Those pointers that turn out to be unique get removed from the map, destroying the last copy and forcing the destructor to run and do the cleanup.
Iterating through hundreds or thousands of objects is not the most efficient thing to do (it's not premature optimization, I am in the optimization stage now). Is it possible to somehow override the shared pointer functionality? For example, add an "onLastRemains" event somehow? Maybe I should iterate through part of the unordered_map every frame (by bucket)? Some other way?
I know, I could try to write my own reference counted asset implementation, but all current code I have assumes that assets are shared pointers. Besides, shared pointers are excellent at what they do, so why re-invent the wheel?
Instead of storing shared_ptrs in the asset manager's map, store weak_ptrs (and, as the edit below notes, use a regular std::map). When you construct a new asset, create a shared_ptr with a custom deleter that calls a function in the asset manager telling it to remove this pointer from its map. The custom deleter should contain the iterator into the map for this asset and supply that iterator when telling the asset manager to erase its element from the map. The reason a weak_ptr is used in the map is that any subsequent request for this element can still be given a shared_ptr (because you can make one from a weak_ptr), but the asset manager doesn't actually have ownership of the asset, so the custom-deleter strategy still works.
Edit: It was noted below that the above technique only works if you use a std::map, not a std::unordered_map, because the stored iterators must stay valid and unordered_map iterators can be invalidated by rehashing. My recommendation would be to still do this, and switch to a regular map.
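A minimal sketch of that idea (Asset and load_asset_from_disk are made-up stand-ins for your asset type and loader; there is no thread-safety here, and it assumes the manager outlives its assets):

#include <map>
#include <memory>
#include <string>

struct Asset { /* texture, skeletal animation, ... */ };

// Hypothetical loader standing in for your Loader classes.
Asset* load_asset_from_disk(const std::string& path);

class AssetManager {
    // std::map rather than unordered_map: we capture iterators in the
    // deleters, and map iterators stay valid until their element is erased.
    std::map<std::string, std::weak_ptr<Asset>> cache_;

public:
    std::shared_ptr<Asset> get(const std::string& path) {
        auto it = cache_.find(path);
        if (it != cache_.end()) {
            if (auto existing = it->second.lock())  // asset still alive?
                return existing;
        } else {
            it = cache_.emplace(path, std::weak_ptr<Asset>()).first;
        }
        // Custom deleter: when the last shared_ptr outside the manager dies,
        // free the asset and drop the cache entry in O(1).
        std::shared_ptr<Asset> sp(load_asset_from_disk(path),
                                  [this, it](Asset* a) {
                                      delete a;
                                      cache_.erase(it);
                                  });
        it->second = sp;  // the cache itself holds only a non-owning reference
        return sp;
    }
};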
Use a std::unique_ptr in your unordered_map of assets.
Expose a std::shared_ptr with a custom deleter that looks up said pointer in the unordered_map, and either deletes it, or moves it to a second container "to be deleted later". Remember, std::shared_ptr does not have to actually own the data in question! The deleter can do any arbitrary action, and can even be stateful.
This lets you keep O(1) lookups for your assets and batch the cleanup (if you want to) instead of doing it in the middle of other scenes.
You can even support temporary 0 reference count without deleting as follows:
Store a std::shared_ptr (created with std::make_shared) in the unordered_map of assets.
Expose custom std::shared_ptrs. These hold a raw T* as their data, and their deleter holds a copy of the std::shared_ptr in the asset map. The deleter "deletes" itself by storing the name (which it also holds) into a central "to be deleted" list.
Then go over said "to be deleted" list and check whether the map entries are indeed unique() -- if not, it means someone else has in the meantime spawned one of the "child" shared_ptrs (a std::shared_ptr<T> constructed with a custom std::function<void(T*)> deleter).
The only downside to this is that the exposed std::shared_ptr no longer simply owns its object; it carries a custom deleter.
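A rough sketch of that second variant (deferred cleanup plus a use-count check), again with made-up Asset/load_asset_from_disk stand-ins and no locking:

#include <memory>
#include <string>
#include <unordered_map>
#include <vector>

struct Asset { /* ... */ };
std::shared_ptr<Asset> load_asset_from_disk(const std::string& path);  // hypothetical

class AssetManager {
    std::unordered_map<std::string, std::shared_ptr<Asset>> assets_;
    std::vector<std::string> maybe_dead_;  // keys whose last outside handle went away

public:
    std::shared_ptr<Asset> get(const std::string& path) {
        auto it = assets_.find(path);
        if (it == assets_.end())
            it = assets_.emplace(path, load_asset_from_disk(path)).first;
        std::shared_ptr<Asset> owner = it->second;
        // Exposed handle: shares the raw pointer, but its deleter keeps `owner`
        // alive and, when the last copy dies, only queues the key for cleanup.
        return std::shared_ptr<Asset>(owner.get(),
                                      [this, path, owner](Asset*) mutable {
                                          maybe_dead_.push_back(path);
                                          owner.reset();
                                      });
    }

    // Run between frames: erase only entries nobody re-acquired in the meantime.
    void collect_garbage() {
        for (const auto& key : maybe_dead_) {
            auto it = assets_.find(key);
            if (it != assets_.end() && it->second.use_count() == 1)
                assets_.erase(it);  // last reference: the asset is freed here
        }
        maybe_dead_.clear();
    }
};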
Perhaps something like this?
std::shared_ptr<asset> get_asset(const std::string& path) {
    static std::map<std::string, std::weak_ptr<asset>> cache;
    auto ap = cache[path].lock();                  // still loaded somewhere?
    if (!ap) cache[path] = ap = load_asset(path);  // (re)load and remember weakly
    return ap;
}
Just a heads up, I have received little formal education on this type of design theory, so bear with me if I'm ignorant of some concepts. All of my reasoning comes from a background in C++. I am working on a component based system for use in a game engine, and this is the structure I have come up with.
There are components, which are just data.
There are nodes, which allow access to that data.
There are systems, which operate on nodes only.
There are Entities, which contain these nodes, components, systems, and other entities.
Pretty straightforward, but let's just focus on the components and nodes, which I have a pretty strict set of guidelines for.
a node can provide access to a collection of components within an entity
a node is dependent upon the existence of all its underlying components
a component can exist independent of any node pointing to it.
a system can only have access and act upon nodes
Now any of these nodes and components can be destroyed at any time. I've implemented this for nodes by using a set of intrusive lists to provide a non-owning way of iterating across nodes; the nodes automatically remove themselves from the list upon their destruction. But now I have a question about the components: on destruction of a component, all nodes that depend on that component must also be destroyed.
Normally, the simple fix when one object must be destroyed along with another is ownership: you place the node within the component, or destroy it in that component's destructor. But nodes here can reference multiple different components. When an object has multiple owners, a ref-counting solution like a smart pointer normally gives ownership to all those objects and destroys the shared object only when all owners are destroyed, but that isn't what I need this time. My big question: what do I do in terms of ownership when I have an object that can only exist while all of its dependencies exist, and where the destruction of any dependency results in the destruction of that dependent object?
Example:
[image] Red are components needed for the existence of the second node.
[image] What it looks like after either component it depends on is destroyed.
Obviously, there are multiple unclean solutions with weak pointers, manual deletions, and lots of checks for an object's existence, but like all issues, I'm wondering if this can be safely achieved by design alone. Again, if this is a very simple or well-known concept, please just point me in the right direction.
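To illustrate what I mean by the "unclean" weak-pointer route (simplified, made-up types rather than my real classes):

#include <memory>
#include <vector>

struct Component { /* plain data */ };

// One of the "unclean" options: the node holds weak_ptrs and every system
// must check validity before each use, destroying the node lazily.
struct ParentNode {
    std::weak_ptr<Component> parent;
    std::vector<std::weak_ptr<Component>> children;

    bool still_valid() const {
        if (parent.expired()) return false;
        for (const auto& c : children)
            if (c.expired()) return false;
        return true;
    }
};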
@Jorgen G Valley: All of the objects are indeed owned by the entity, in that all objects are destroyed on destruction of the containing entity, but nodes, components, systems, and entities should be able to be added or removed at any time dynamically. Here is an example. Start with the world entity, which contains an entity made of one mesh and two vectors. The two vectors are updated independently, but say you want to parent them together: you would simply add a node, specify one vector as the parent, and any number of vectors as children. Adding the node to the entity places it in a non-owning list, allowing the previously existing "Parent" system to iterate through all "Parent" nodes and perform its functionality on each one. Unparenting the object involves just deleting the node, but then the vector and mesh should still exist. Say you want to destroy just that vector and hold onto the mesh for use in another model; then the destruction of that vector should also destroy the parent node, because it no longer references a valid vector.
Here are some visuals:
Here is an example of the case above. [image]
Now here is an example of removing the parent node. [image]
Notice that the component stays around because it could be used in other nodes, as in this example, where the render node is using it. The destruction of the node closed the gap in the intrusive list used by the parent system, meaning the parent system only manages whatever other entities have parent nodes.
Now here is an example of removing the vector component. [image]
In this case, all nodes dependent upon that vector must be removed as well, including the parent and render nodes. Their destruction closes the gaps in the respective intrusive lists, and the systems continue on their way. Hopefully this helps illustrate the design I'm trying to achieve.
I think you are complicating things a bit more than you need. You say that nodes and components can be destroyed at any time. Is this really true?
In your text you describe the entity as the owner since you say it contains the components, nodes and systems.
My approach to this would be that a component is only destroyed when the entity owning it is destroyed. This means that the node, component, and system don't need to bother about destruction. It is done by the owning object, the entity.
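As a rough sketch of what I mean, with placeholder Component/Node/Entity types rather than your real classes:

#include <memory>
#include <vector>

struct Component { virtual ~Component() = default; };
struct Node      { virtual ~Node()      = default; };

// The entity is the single owner: components and nodes live exactly as long
// as it does, so there is no separate destruction bookkeeping.
class Entity {
    std::vector<std::unique_ptr<Component>> components_;
    std::vector<std::unique_ptr<Node>> nodes_;

public:
    Component& add_component(std::unique_ptr<Component> c) {
        components_.push_back(std::move(c));
        return *components_.back();
    }
    Node& add_node(std::unique_ptr<Node> n) {
        nodes_.push_back(std::move(n));
        return *nodes_.back();
    }
    // ~Entity() destroys members in reverse declaration order, so nodes_
    // (which may reference components) go away before components_ do.
};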
EDIT: If you have situations where the component, node or system can be destroyed without the overlying entity to be destroyed I am intrigued to hear an example. It's a really interesting problem in itself. :)
If a class contains a pointer to a singleton class, can it be aggregation?
To my understanding it cannot be a has-a relationship, since the class does not create an instance of the singleton class; it is just using it, like an association relationship.
The title doesn't make 100% complete sense as written. There are singleton classes, but there aren't really singleton relationships. Any relationship can be assigned a multiplicity at either end, so if you mean one-to-one relationships, all you do is assign multiplicity 1 at both ends.
Classes can also have multiplicities. You don't often see this used, except in one case: singletons.
When it comes to A having or containing or referencing B, there are basically three levels of tightness in UML.
Aggregation (hollow diamond at the containing end) implies that the containment is not exclusive and that the contained object does not share any aspect of its lifecycle with the containing object. In implementation, this is typically a pointer.
Composition (filled diamond at the containing end) implies that the contained object gets destroyed when the containing object does. Getting this to work usually means the containment is exclusive. In implementation, this is often a pointer whose pointee is deleted in the destructor of the containing class (although it may not be created in the constructor of the containing class).
Directed association or member attribute (the same thing in UML) implies that the contained object is part of the state, or a constituent if you will, of the containing object. In implementation, this typically means that the reference is not a pointer, or, if it is, that the contained object is co-created and co-destroyed with the containing object.
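In C++ terms, the three levels might look roughly like this (Car/Engine are made-up illustration classes; the UML-to-code mapping is a convention, not a rule):

#include <memory>

class Engine {};

// Aggregation: non-exclusive, no lifecycle coupling - just a pointer we don't own.
class CarAggregate {
    Engine* engine_;                  // someone else creates and destroys it
public:
    explicit CarAggregate(Engine* e) : engine_(e) {}
};

// Composition: the contained object dies with the container.
class CarComposite {
    std::unique_ptr<Engine> engine_;  // destroyed in ~CarComposite()
public:
    CarComposite() : engine_(std::make_unique<Engine>()) {}
};

// Attribute / directed association: the engine is part of the car's state,
// co-created and co-destroyed with it.
class CarMember {
    Engine engine_;                   // plain member, no pointer at all
};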
Aggregation of a singleton is perfectly permissible (even from several different classes), because aggregation is by definition non-exclusive.
Composition is a bit iffy, unless the containing class is also a singleton.
Attribute / directed association is most likely wrong. If the containing class is a singleton, it doesn't make any sense to make the contained class a singleton as well since that's implied. And if the contained class is used as a member in two different classes, it cannot be a singleton.
In addition to the above, you can also of course add as many Usage relationships as you wish. This is common in all design, and implies that the class at the source end of the relationship calls methods in the class at the target end.
I would say, technically, yes, you can have a member variable that is a pointer to a singleton object and call it aggregation; using the aggregation term doesn't have much meaning once you write the code though. For all intents and purposes, it is just an association.
However, the use of an aggregation association in a diagram may or may not help a viewer of the diagram to comprehend it. That probably depends on who you are going to show it to and what they might understand aggregation to mean.
To actually answer the question in the title (using info from The UML User Guide, 2nd Edition):
 ______________________               ______________________
|                      |             |                     1|
|   AggregatingClass   |             |    SingletonClass    |
|______________________|        0..1 |______________________|
|                      |<>-----------|                      |
|______________________|             |______________________|
|                      |             |                      |
|______________________|             |______________________|
(Note the 1 in the upper right hand corner of the singleton class box indicating cardinality.)
There is also an aggregation drawn with an open (hollow) diamond instead of a filled one.
The open diamond means the containing instance does not create the other (but it still has a has-a relationship).
You could pretend that it is aggregation, but the truth of it is this: singletons have nothing to do with object-oriented code. They are a form of global state (just like global variables are).
You might want to consider a different approach to the problem.
P.S. Some materials that could help:
Global State and Singletons
Don't Look For Things!
I'm starting to develop a graphics engine, just for practice purposes. One of the first questions that arose is whether to use handles or smart pointers to refer to my class instances.
From my point of view:
Smart pointers pros: created on demand, they do not have the problem of becoming stale pointers; cons: as they are in a linked list, searching for a pointer is an O(n) operation.
Handles pros: search is O(1), object relocation is O(1); cons: they can become stale pointers, and creating a new handle forces the system to check for the first NULL entry in the handle table.
Which one to choose? Please explain your selection.
EDITED:
I want to clarify some points after your comments and answers.
I don't mean smart pointers are a linked list in the sense of "represented by an STL linked list". I mean they behave, in some ways, like a linked list (if you move an object from one memory block to another, you need to iterate over the full set of smart pointers to update all references to that object properly; it could be done with a linked list).
And I don't mean handles exactly in the sense of opaque pointers or pointer-to-implementation models. I mean having a global handle table (an array of pointers), so that when I request an object I get a dereferenceable instance containing the index into this table where the actual pointer to the object can be found. That way, if I move the object from one block to another, just updating the pointer entry in the handle table updates all handles to it at the same time.
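To make the handle-table idea concrete, a bare-bones sketch (simplified; a real implementation would likely add generation counters to detect stale handles):

#include <cstddef>
#include <vector>

struct Texture { /* ... */ };

// A handle is just an index into a global table of pointers.
struct Handle { std::size_t index; };

class HandleTable {
    std::vector<Texture*> slots_;

public:
    Handle add(Texture* t) {
        // Reuse the first null slot, or grow the table.
        for (std::size_t i = 0; i < slots_.size(); ++i) {
            if (!slots_[i]) { slots_[i] = t; return Handle{i}; }
        }
        slots_.push_back(t);
        return Handle{slots_.size() - 1};
    }

    Texture* resolve(Handle h) const { return slots_[h.index]; }

    // Relocating the object means patching a single table entry:
    // every handle that stores this index sees the new address at once.
    void relocate(Handle h, Texture* new_location) { slots_[h.index] = new_location; }

    void remove(Handle h) { slots_[h.index] = nullptr; }  // handles go stale here
};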
Neither of those definitions fits what's normally used. Smart pointers aren't in a linked list in any way at all. Usually you use the observer pattern to keep a vector of raw pointers to the objects that still exist, if you need to iterate over them or something. Handles as you describe them are pretty much only used for binary compatibility reasons and never in-process.
Use smart pointers, they take care of themselves.
The term "handle" is a broad term that, essentially, means an identifier to an object.
A pointer or smart pointer falls under this definition, so you need to pick a terser term for your Option 2.
"Handle"
|
/------+-------\
/ | \
/ | \
Pointer Reference Other Identififer
| | \
|----+----| `T&` \
| | |---+------|
`T*` `shared_ptr<T>` Text Number (e.g. HWND in WinAPI)
If I assume that you mean some fixed, memory-abstracted "other identifier" then, sure, you can employ this. You don't necessarily have an either/or scenario here. You probably want to use smart pointers anyway (for lifetime management if nothing else), and smart pointers don't need to be in a linked list.
You could have a std::map<your_identifier_type, std::shared_ptr<T> > to map your fixed, user-defined identifier to a [potentially-changing] smart pointer.
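For example (Texture here is just a made-up payload type, and your_identifier_type could equally be an integer id):

#include <map>
#include <memory>
#include <string>

struct Texture { /* ... */ };
using your_identifier_type = std::string;

// Fixed identifiers on the outside, smart pointers (which may be swapped out
// when the object is reloaded or relocated) on the inside.
std::map<your_identifier_type, std::shared_ptr<Texture>> registry;

std::shared_ptr<Texture> lookup(const your_identifier_type& id) {
    auto it = registry.find(id);
    if (it != registry.end()) return it->second;
    return nullptr;  // empty shared_ptr when the id is unknown
}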
Disclaimer: This diagram was hastily drawn and represents my vision of the terminology tree as it stands now, half an hour after getting out of bed. There may be minor discrepancies with other views, but it should give a fairly reliable impression of things.