I am currently working on a small project using ECS and DirectX12 and wanted to get some advice on whether there is a "preferred" way, or alternative ways, of solving my issue.
Let me give a very basic layout of an entity which can be rendered:
<entity name = "Cube">
<component = Transform ..blah blah>
<component = Mesh = "cube.txt"> // holds data about verts/indices etc.
<component = Materials = "Mat1"> // holds data about the materials etc.
</entity>
Approach 1
When the entities are loaded in, each component is loaded in turn. On reaching the mesh component, it loads the mesh data, creates the buffers needed, and stores them in the render system (which it can reach through the world).
Say the next entity comes along and also wants the same mesh. It would check with the render system to see whether the mesh already exists; if so, it would not need to load it in again and could simply take a new reference to it for that entity.
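A rough sketch of that lookup in C++ (RenderSystem, MeshData, and getOrLoadMesh are hypothetical names, not anything from the question):

#include <memory>
#include <string>
#include <unordered_map>

struct MeshData { /* vertex/index buffers, upload heaps, etc. */ };

class RenderSystem {
public:
    // Loads the mesh on first request; later requests share the same buffers.
    std::shared_ptr<MeshData> getOrLoadMesh(const std::string& path) {
        auto it = m_meshes.find(path);
        if (it != m_meshes.end())
            return it->second;                     // already loaded: reuse
        auto mesh = std::make_shared<MeshData>();  // parse file, create buffers
        m_meshes.emplace(path, mesh);
        return mesh;
    }

private:
    std::unordered_map<std::string, std::shared_ptr<MeshData>> m_meshes;
};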
Approach 2
Before any entities are loaded, the render system would load a list of meshes up front (this means keeping some list which would need updating each time a new mesh is to be included in the system).
Now when the entities are loaded in, they can just carry a tag that matches the tag of the mesh in the render system.
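A rough sketch of this tag-based variant (again with hypothetical names):

#include <string>
#include <unordered_map>
#include <vector>

struct MeshData { /* vertex/index buffers, etc. */ };

// Registry owned by the render system: tag -> GPU-ready mesh.
std::unordered_map<std::string, MeshData> g_meshRegistry;

// Load everything named in the manifest before any entities are created.
void preloadMeshes(const std::vector<std::string>& manifest) {
    for (const auto& tag : manifest)
        g_meshRegistry.emplace(tag, MeshData{}); // load the file behind 'tag' here
}

// The component then carries only the tag; it has no loading logic of its own.
struct MeshComponent {
    std::string tag; // must match a key in g_meshRegistry
};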
I am not really sure what the best approach is here. I started with approach 1, where the mesh component handles the loading, and it does seem simple and straightforward; however, I have not done anything ECS-like before and wanted advice in case I have overlooked a far more effective, preferred approach.
The concern I am keeping in mind for later work is the ability to batch objects that share the same mesh, PSO, etc.
If you find that the first approach works for you right now, I'd say keep going with it until you definitely need to replace it with a system that can handle new functionality.
The second approach does sound like the "proper" way to do it: in commercial game engines a list of assets is loaded first at engine launch, and most operations thereafter are carried out on the assumption that the mesh assets you need already exist (obviously not counting things like procedural generation and so forth).
The repository pattern is there to abstract away the actual data source, and I do see a lot of benefits in that. But a repository should not expose IQueryable (to prevent leaking DB details), and it should always return domain objects, not DTOs or POCOs; it is this last point I have trouble getting my head around.
If a repository always has to return a domain object, doesn't that mean it fetches way too much data most of the time? Let's say it returns an employee domain object with forty properties, and the service and view layers consuming that object only use five of them.
It means the database has fetched a lot of unnecessary data and pumped it across the network. Doing that with one object is hardly noticeable, but if millions of records are pushed across that way and much of the data is thrown away every time, is that not considered bad behavior?
Yes, when adding, editing, or deleting the object you will use the entire object, but reading the entire object and pushing it to another layer that uses only a fraction of it is not utilizing the underlying database and network in the most optimal way. What am I missing here?
There's nothing preventing you from having a separate read model (which could be a separately stored projection of the domain, or a query-time projection) and separating out the command and query concerns - CQRS.
If you then put something like GraphQL in front of your read side, the consumer can decide exactly what data they want from the full model, down to the individual field/property level.
Your commands still interact with the full domain model as before (except where it's a performance no-brainer to use set-based operations).
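As a rough illustration of the read side (a sketch only; EmployeeSummary, ReadDb, and the SQL are hypothetical, not from the question):

// Thin read model shaped for one view: only the fields the consumer needs.
interface EmployeeSummary {
  id: number;
  name: string;
  department: string;
}

// Minimal storage interface assumed for this sketch.
interface ReadDb {
  query<T>(sql: string): Promise<T[]>;
}

// Read-side query handler: bypasses the domain repository and fetches
// three columns instead of all forty properties of the aggregate.
async function listEmployeeSummaries(db: ReadDb): Promise<EmployeeSummary[]> {
  return db.query<EmployeeSummary>(
    "SELECT id, name, department FROM employees"
  );
}

The command side would still load and save the full employee aggregate through the repository.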
It seems like you can only log data through the return values of the train function. In many workflows, it might make more sense to save images directly in the middle of a train function (e.g. images sampled by a generative model or from a vision-based MDP).
Is there a simple way to do this? One idea would be to find the log directory and write to it directly, but would this cause issues?
I'm guessing you're asking about logging images in Trainable._train().
If you're not in local mode, then within the trainable you can access the self.logdir attribute and write images there. These should automatically be synced back to your head node (if you're running remotely).
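A minimal sketch of that (the trainable class, the counter, and the placeholder bytes are hypothetical; in practice you would write real image data):

import os

from ray import tune


class SamplingTrainable(tune.Trainable):  # hypothetical trainable
    def _setup(self, config):
        self.step_count = 0

    def _train(self):
        self.step_count += 1
        # self.logdir is this trial's log directory, provided by Tune; files
        # written here get synced back to the head node on a cluster.
        path = os.path.join(self.logdir, "sample_%05d.png" % self.step_count)
        with open(path, "wb") as f:
            f.write(b"")  # placeholder: write your sampled image bytes here
        return {"num_samples_saved": self.step_count}  # metrics for Tune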
There's a pattern I haven't figured out for Component yet:
I have some "live" configuration that requires disk IO at system start (hence a component) and depends on a map of static config (.edn); after this "live" configuration is instantiated, it won't change or side-effect anything anymore.
For example, I need to set this up once, and it depends on static config:
(buddy.core.backends/jws
  {:secret (buddy.core.keys/public-key
             path-to-public-key-from-static-config)})
I would then reuse that backend ad infinitum (e.g. in buddy.auth.middleware/wrap-authentication), and it neither changes nor side-effects anything.
Possible Solutions
I could make a component that stores this backend at system start (a minimal sketch of this option appears after this list). But this gives up generality, because when I want to add similar "live config", it would have to be explicitly written into the component, and that gives up the spirit of generality that I think Component champions (e.g. Duct says components define side effects and create boundaries to access them).
I could pass a generic component a map of key -> [fn & args] entries, where the fn and args get evaluated and stored in the component. But this feels like it offloads computation to my configuration .edn, which is an anti-pattern. For example:
(private-key priv-path-from-static
             (slurp :password-path-from-static))
Should I encode the notion of slurping in my .edn config? I don't want to offload computation to a config file...
The backend and keys could be instantiated on a per-need basis within each component that requires them. IMO, that's too much recomputation of the exact same thing, when I'd rather have it stored in memory once.
I could have an atom component that holds a map of these "live" config objects, but then they get destructively added in, and my code has lost its declarative nature.
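For reference, a minimal sketch of the first option, assuming com.stuartsierra.component; the record and key names are hypothetical, and the buddy calls are the ones from above:

(ns myapp.auth
  (:require [com.stuartsierra.component :as component]
            [buddy.core.backends]
            [buddy.core.keys]))

(defrecord JwsBackend [static-config backend]
  component/Lifecycle
  (start [this]
    ;; Runs once at system start; the backend is then plain immutable data.
    (assoc this :backend
           (buddy.core.backends/jws
             {:secret (buddy.core.keys/public-key
                        (:public-key-path static-config))})))
  (stop [this]
    (assoc this :backend nil)))

Other components would then depend on this one and read :backend from it.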
TL;DR
What's the best way to create configuration at system start, possibly with dependencies, that then becomes available to other components as a component, without giving up the generality which components should have?
In an ideal world, I think the component itself should describe what type of configuration data it needs (this could be done with a protocol extending the component in question). When the config component is started, it should look at all the other components, get the list of config requirements, and resolve it somehow (from a file, a database, etc.).
I've not seen such a library, but Aviso's config library comes close, maybe you can adapt it to your needs.
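A rough sketch of that idea, assuming com.stuartsierra.component; the protocol, record, and config.edn path are all hypothetical:

(ns myapp.config
  (:require [clojure.edn :as edn]
            [com.stuartsierra.component :as component]))

(defprotocol ConfigRequirements
  (config-keys [this] "Config keys this component needs at system start."))

(defrecord ConfigResolver [components resolved]
  component/Lifecycle
  (start [this]
    (let [ks (mapcat config-keys
                     (filter #(satisfies? ConfigRequirements %) components))]
      ;; Resolve from a file here; a database or env vars would work as well.
      (assoc this :resolved
             (select-keys (edn/read-string (slurp "config.edn")) ks))))
  (stop [this]
    (assoc this :resolved nil)))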
I need a way for the Ember router to route to a recursive path.
For Example:
/:module
/:module/:submodule
/:module/:submodule/:submodule
/:module/:submodule/:submodule/...
Can this be done with Ember's router, and if so, how?
I've been looking for examples and tearing apart the source, and I've pretty much come to the conclusion that it's not possible.
In a previous question, someone had pointed me to a way to get the url manually and split it, but I'm stuck at creating the state for the router to resolve to.
As of now, in my project, I just use Ember.HashLocation to set up my own state manager.
The reason I need this is that the module definitions are stored in a database, and at any given point a submodule of a submodule, recursively, could be added. So I'm trying to make the Application Engine handle the change.
Do your submodules in the database not have unique IDs? It seems to me that rather than representing your hierarchy in the path, you should just go straight to the appropriate module or submodule. Of course the hierarchy is still in your data model, but it shouldn't have to be represented in your routing scheme. Just use:
/module/:moduleId
/submodule/:submoduleId
And don't encode the hierarchy in the routes. I understand it might be natural to do so, but there's probably not a technical reason to.
If your submodules don't have unique IDs, it's maybe a little tougher... you could build a unique ID by concatenating the ancestor IDs together (say, with underscores), which is similar to splitting the URL, but probably a little cleaner. I will say that Ember/Ember Data doesn't seem to be too easy to use with entities that have composite keys; if everything has a simple numeric key, everything becomes easier (anyone who wants to argue with me on this, please explain how!).
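A sketch of that flat scheme, using the same match/to router DSL as the other answer below (the route names are hypothetical):

App.Router.map(function(match) {
  // Flat routes: the hierarchy stays in the data model, not in the URL.
  match('/module/:module_id').to('showModule');
  match('/submodule/:submodule_id').to('showSubmodule');
});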
Do you mean like this:
App.Router.map(function(match) {
  match('/posts').to('blogPosts');
  match('/posts/:blog_post_id').to('showBlogPost');
});
I was able to load a single model using Assimp's ReadFile function; the result was assigned to an aiScene pointer. Now I want to load multiple models of the same format. How can I achieve this? The documentation does not provide enough information on how to do this.
The main goal of the Assimp library is to load and postprocess your assets (e.g. a model/scene); it isn't meant for general scene-graph management. Usually you load your models into separate aiScene structures and translate them into your scene graph one by one.
You can call ReadFile multiple times on a single Assimp::Importer object, but keep in mind that each invocation frees the previous aiScene. Therefore, the best thing you can do is translate each scene directly into your own scene graph, as described by tbalazs.
If you really want to stick with aiScene, create a fresh importer object for each scene and keep it alive (i.e. store a list of (scene, importer) tuples somewhere) for as long as needed.
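A minimal sketch of that (scene, importer) approach (the struct, the function, and the post-processing flag choice are illustrative, not prescribed by Assimp):

#include <assimp/Importer.hpp>
#include <assimp/postprocess.h>
#include <assimp/scene.h>
#include <memory>
#include <string>
#include <vector>

struct LoadedModel {
    std::unique_ptr<Assimp::Importer> importer; // owns the scene's memory
    const aiScene* scene;                       // valid while importer lives
};

std::vector<LoadedModel> loadModels(const std::vector<std::string>& paths) {
    std::vector<LoadedModel> models;
    for (const auto& path : paths) {
        auto importer = std::make_unique<Assimp::Importer>();
        const aiScene* scene = importer->ReadFile(path, aiProcess_Triangulate);
        if (scene)  // nullptr on failure; see importer->GetErrorString()
            models.push_back(LoadedModel{std::move(importer), scene});
    }
    return models;
}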