I was able to load a single model using Assimp's ReadFile function, which was then assigned to an aiScene pointer. Now I want to load multiple models of the same format. How can I achieve this? The documentation does not provide enough information on how to do it.
The main goal of the Assimp library is to load and post-process your assets (e.g. models/scenes); it is not meant for general scene-graph management. Usually you load your models into separate aiScene structures and translate them into your own scene graph one by one.
You can call ReadFile multiple times on a single Assimp::Importer object, but keep in mind that each invocation frees the previous aiScene. Therefore, the best thing you can do is translate each scene directly into your own scene graph, as described by tbalazs.
If you really want to stick to aiScene, create a fresh importer object for each scene and keep it alive (i.e. store a list of (scene, importer) pairs somewhere) for as long as the scenes are needed.
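For illustration, here is a minimal C++ sketch of that idea; ModelCache and its method name are hypothetical, but the Assimp calls are the real API:

#include <assimp/Importer.hpp>
#include <assimp/postprocess.h>
#include <assimp/scene.h>
#include <memory>
#include <string>
#include <vector>

class ModelCache {
public:
    // Returns nullptr on failure. The returned scene stays valid only as
    // long as the importer that produced it lives in this cache.
    const aiScene* Load(const std::string& path) {
        auto importer = std::make_unique<Assimp::Importer>();
        const aiScene* scene = importer->ReadFile(path, aiProcess_Triangulate);
        if (scene) {
            importers_.push_back(std::move(importer));
        }
        return scene;
    }

private:
    // Each importer owns exactly one scene; destroying an importer frees
    // its scene, so the importers are held for the cache's lifetime.
    std::vector<std::unique_ptr<Assimp::Importer>> importers_;
};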
For a project, I am trying to create a web app that, among other things, allows training machine learning agents using Python libraries such as Dedupe or TensorFlow. In cases such as Dedupe, I need to provide an interface for active learning, which I currently realize through jQuery-based AJAX calls to a view that receives and sends the necessary training data.
The problem is that I need this agent object to stay alive across multiple view calls and be accessible by each individual call. I have tried realizing this via the built-in cache system using Memcached, but the serialization does not seem to keep all the information intact, and while I am technically able to restore the object from the cache, this appears to break the training algorithm.
Essentially, I want to keep the object alive within the application itself (rather than in an external memory store) and be able to access it from another view, but I am at a bit of a loss as to how to realize this.
If someone knows the proper technique to achieve this, I would be very grateful.
Thanks in advance!
To follow up on this question: I have since realized that the behavior was caused by using the result of a method call on the object loaded from cache directly in the return value of a view. Specifically, my code looked as follows:
#model is the object loaded from cache
#this returns the wrong object (same object as on an earlier call)
return JsonResponse({"pairs": model.uncertain_pairs()})
and was changed to the following
#model is the object loaded from cache
#this returns the correct object (calls and returns the model.uncertain_pairs() method properly)
uncertain = model.uncertain_pairs()
return JsonResponse({"pairs": uncertain})
I am unsure whether this happens due to the implementation on the Dedupe or Django side, or due to Python itself, but it has undoubtedly fixed the issue.
To return to the question: Django does seem to be able to properly (de)serialize objects and their properties in the cache, as long as the cache is set up properly (see "Apparent bug storing large keys in django memcached", which I also had to deal with).
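For completeness, here is a minimal sketch of the cache round-trip described above, assuming the model was cached earlier under a hypothetical key "dedupe_model"; cache.get/cache.set and JsonResponse are the real Django APIs:

from django.core.cache import cache
from django.http import JsonResponse

def uncertain_pairs_view(request):
    # The model is assumed to have been stored earlier, e.g. with
    # cache.set("dedupe_model", model, timeout=None)
    model = cache.get("dedupe_model")
    # Call the method first and store the result, as described above,
    # rather than calling it inline in the JsonResponse literal.
    uncertain = model.uncertain_pairs()
    return JsonResponse({"pairs": uncertain})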
The repository pattern is there to abstract away the actual data source, and I do see a lot of benefits in that, but a repository should not expose IQueryable (to prevent leaking DB details) and it should always return domain objects, not DTOs or POCOs; it is this last point I have trouble getting my head around.
If a repository always has to return a domain object, doesn't that mean it fetches far too much data most of the time? Let's say it returns an employee domain object with forty properties, while the service and view layers consuming that object actually use only five of them.
That means the database has fetched a lot of unnecessary data and pumped it across the network. Doing that with one object is hardly noticeable, but if millions of records are pushed across that way and much of the data is thrown away every time, is that not considered bad behavior?
Yes, when adding, editing, or deleting an object you will use the entire thing, but reading the entire object and pushing it to another layer that uses only a fraction of it does not utilize the underlying database and network in the most optimal way. What am I missing here?
There's nothing preventing you from having a separate read model (which could be a separately stored projection of the domain, or a query-time projection) and separating out the command and query concerns (CQRS).
If you then put something like GraphQL in front of your read side, the consumer can decide exactly what data they want from the full model, down to individual field/property level.
Your commands still interact with the full domain model as before (except where it's a performance no-brainer to use set-based operations).
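As a small illustration of that field-level selection, a hypothetical GraphQL query against such a read model might look like this (the type and field names are invented):

query {
  employee(id: 42) {
    firstName
    lastName
    department
  }
}

Only the requested fields are resolved and sent over the wire, however many properties the underlying domain object has.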
It seems like you can only log data via the return values of _train. In many workflows, it might make more sense to save images directly in the middle of a train function (e.g. images sampled by a generative model or from a vision-based MDP).
Is there a simple way to do this? One idea would be to try to find the log-directory and write to it directly, but would this have issues?
I'm guessing you're asking about logging images in Trainable._train().
If you're not in local mode, you can access the self.logdir attribute within the trainable and write images to it. This should automatically be synced back to your head node (if you're running remotely).
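A minimal sketch of that, using the Trainable._setup/_train API the question refers to; render_sample is a hypothetical helper that returns encoded image bytes:

import os
from ray import tune

class MyTrainable(tune.Trainable):
    def _setup(self, config):
        self.step_count = 0

    def _train(self):
        self.step_count += 1
        # Write an image into the trial's log directory; Tune syncs this
        # directory back to the head node when running remotely.
        path = os.path.join(self.logdir, "sample_%d.png" % self.step_count)
        with open(path, "wb") as f:
            f.write(render_sample())  # hypothetical image-producing helper
        return {"images_written": self.step_count}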
I am currently working on a small project using ECS and DirectX12 and wanted to get some advice on whether there is a "preferred" way, or alternative ways, of solving my issue.
Let me give a very basic layout of an entity which can be rendered:
<entity name = "Cube">
<component = Transform ..blah blah>
<component = Mesh = "cube.txt"> // holds data about verts/indices etc.
<component = Materials = "Mat1"> // holds data about the materials etc.
</entity>
Approach 1
When the entities are loaded, each component is loaded in turn. When it gets to the mesh component, it loads the mesh data, creates the buffers needed, and stores them in the render system (which it can reach through the world).
Say the next entity comes along and also wants the same mesh: it would check with the render system to see if the mesh already exists, and if so it would not load it again but simply create a new copy for that entity.
Approach 2
Before loading any entities, the render system would have a list of meshes to load up front (meaning a list that would need updating each time a new mesh is to be included in the system).
Now when the entities are loaded, they can just carry a tag that matches the tag on the mesh in the render system.
I am not really sure what the best approach is here. I started with approach 1, and having the mesh component handle the loading does seem simple and straightforward; however, I have not done anything like this ECS approach before and wanted advice, as I may have overlooked a far more effective approach.
The concern I am keeping in mind for later work is the ability to batch objects that use the same mesh types, PSOs, etc.
If you find that the first approach works for you right now, I'd say keep going with it until you need to definitively replace it with a system that can handle new functionality.
The second approach does sound like the "proper" way to do it: in commercial game engines, a list of assets is loaded first at engine launch, and most operations thereafter are carried out on the assumption that the mesh assets you need already exist (obviously not counting things like procedural generation and so forth).
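For what it's worth, here is a minimal C++ sketch of the approach-1 cache described in the question; RenderSystem, Mesh, and the method names are all illustrative:

#include <memory>
#include <string>
#include <unordered_map>

struct Mesh {
    // vertex/index buffers, etc.
};

class RenderSystem {
public:
    // Returns the already-loaded mesh if present; otherwise loads it once
    // and caches it, so later entities share the same buffers.
    std::shared_ptr<Mesh> GetOrLoadMesh(const std::string& path) {
        auto it = meshes_.find(path);
        if (it != meshes_.end()) {
            return it->second;
        }
        auto mesh = LoadMeshFromFile(path);
        meshes_[path] = mesh;
        return mesh;
    }

private:
    std::shared_ptr<Mesh> LoadMeshFromFile(const std::string& path) {
        // Placeholder: real code would parse the file and create GPU buffers.
        return std::make_shared<Mesh>();
    }

    std::unordered_map<std::string, std::shared_ptr<Mesh>> meshes_;
};

Keying the cache on the mesh path also gives you a natural handle for the batching mentioned above: entities whose components resolve to the same Mesh pointer can share buffers and pipeline state.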
I'm new to Symfony2 and Doctrine.
Here is the problem as I see it.
I cannot use:
$repository = $this->getDoctrine()->getRepository('entity');
$my_object = $repository->findOneBy($index);
on an object that has been persisted BUT NOT FLUSHED YET!
I think getRepository reads from the DB, so it will not find an object that has not been flushed.
My question: how do I read those objects that have been persisted (I think they are somewhere in a "Doctrine session") so I can re-use them before I flush my entire batch?
Every profile has 256 physical plumes.
Every profile has one plumeOptions record assigned to it.
In plumeOptions, I have a cartridgeplume which is an FK to PhysicalPlume.
Every plume is identified by an ID (auto-generated) and an INDEX (user-generated).
Rule: say profile 1 has physical_plume_index number 3 (= index) connected to it.
Now, I want to copy a profile with all its related data to another profile.
A new profile is created. 256 new plumes are created and copied from the older profile.
I want to link the new profile to the new plume with index 3.
Check here: http://pastebin.com/WFa8vkt1
I think you might want to have a look at this function:
$entityManager->getUnitOfWork()->getScheduledEntityInsertions()
It gives you back a list of entity objects which have been persisted but not yet flushed.
Hmm, I didn't read your question carefully enough: with the above you retrieve the full list (as an array), but you cannot query it the way you can with getRepository. I will try to find something for you.
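In the meantime, here is a minimal sketch of filtering that array by hand (Product and getName() are hypothetical; getScheduledEntityInsertions() is the real UnitOfWork API):

$pending = $entityManager->getUnitOfWork()->getScheduledEntityInsertions();
$matches = array();
foreach ($pending as $entity) {
    // Filter manually, since the array cannot be queried like a repository.
    if ($entity instanceof Product && $entity->getName() === $name) {
        $matches[] = $entity;
    }
}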
I think you might be looking at the problem from the wrong angle. Doctrine is your persistence layer and database access layer. It is the responsibility of your domain model to provide access to objects once they are in memory. So the problem boils down to: how do you get a reference to an object without the persistence layer?
Where do you create the object you need to get hold of later? Can the method/service that creates the object return a reference to the controller so it can propagate it to the other place you need it? Can you dispatch an event that you listen to elsewhere in your application to get hold of the object?
In my opinion, Doctrine should be used at the startup of the application (as early as possible) to initialize the domain model, and at the shutdown of the application to persist any changes made to the domain model during the request. Using a repository to get hold of objects in the middle of a request is probably a code smell, and you should look at how the application flow can be refactored to remove that need.
Yours is effectively a business logic problem.
Running a findBy query against the database for objects that are not yet flushed means hitting the DB layer for objects that you already have in your function scope.
Also keep in mind that findOneBy will also retrieve other objects previously saved with the same features.
If you only need to search among the newly created objects, you could, for example, keep them in a session array variable and iterate over them with a foreach.
If you need a mix of already-saved items and some new items, you should treat the two parts separately: handle one with a foreach and the other with a repository query.
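A minimal sketch of that mixed approach (Product, getName(), and $newlyCreated are hypothetical; findBy is the real repository API):

// Already-flushed items come from the repository...
$saved = $entityManager->getRepository('AppBundle:Product')
                       ->findBy(array('name' => $name));

// ...while items persisted during this request are checked by hand.
$new = array();
foreach ($newlyCreated as $entity) {
    if ($entity->getName() === $name) {
        $new[] = $entity;
    }
}

$all = array_merge($saved, $new);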