SceneKit: How to get all of a node's materials? - swift3

OK, from what I understand, materials can be created for a .dae (or any other 3D model) being used as an SCNNode in Xcode, in the model's scene editor:
The topmost material gets applied automatically and all is well. My problem is that I want to programmatically SWITCH between these materials that have been created throughout my game.
I tried to get an array of these materials by doing:
node.geometry?.materials
however this only returns that first material. I've tried everything but can't find a way to get the other materials and switch to them. Right now I am trying:
childNode.geometry?.materials = [(childNode.geometry?.material(named: "test"))!]
//childNode is the node
where test is that second material, but the lookup returns nil. How can I programmatically switch between multiple materials?

If the material is not actually assigned to one of the material slots (such as diffuse) it’s not part of the geometry either.
You could assign the second material to another slot and then reset its color in code after you read the material into a property to be used later.
Another option I used myself is assigning multiple materials to different faces of the same model (in third-party 3D software). Exported as .dae and added to Xcode, the geometry is then automatically divided into separate elements, each with its own material, which I can then adjust in Xcode and iterate over in the same way you are trying to do.

Related

Vertex buffer not clearing properly

Context
I'm a beginner in 3D graphics and I'm starting out with Vulkan, which I already know is not recommended for beginners (please spare me the lecture). I'm currently working on a university project to develop the base of a 3D computer graphics engine based on the Vulkan API.
The problem
Example of running the app to render the classic 2D triangle
Drawing a 3D mesh after having drawn the triangle
So as you can see in the images above I want to be able to:
Run the engine.
Choose an object to be drawn.
Close the window.
Choose another object to be drawn.
Open the same window back up with only the last object chosen visible.
And the way I have been doing this is essentially by cleaning up the whole swap chain and recreating it from scratch once the window is closed and a new object has been chosen. Now I'm aware this probably sounds like terrorism to any computer graphics engineer, but the reason I'm doing it is that I don't know a better way; I have only just finished the Vulkan tutorial.
Solutions tried
I have checked that I do a vkDestroyBuffer and vkFreeMemory on the current vertex buffer before recreating it once I choose a different object.
I have disabled depth testing entirely in case it had something to do with it, it doesn't.
Note: The code is extensive and I really don't have a clue which part of it could be relevant to the problem, so I opted not to clutter the question; if there is a specific part you think might help you find the solution, please request it.
Thank you for taking the time to read my question.
A comment by user369070 ended up drawing my attention to the function I use to read OBJ files, which made me realize that this function wasn't clearing the data structure I use to store the vertices of the chosen object before passing them to the vertex buffer.
I just had to add vertices = {}; at the top of the function to solve it.
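For reference, a minimal sketch of what the fixed loader ends up looking like; the names here are placeholders rather than my actual code, and the point is simply that the containers are reset before the next object is read in:

#include <cstdint>
#include <string>
#include <vector>

struct Vertex { float pos[3]; float color[3]; };   // placeholder vertex layout

// Hypothetical stand-ins for the members that persist between loads.
std::vector<Vertex>   vertices;
std::vector<uint32_t> indices;

void loadObjModel(const std::string &path)
{
    vertices = {};   // the fix: drop the previously loaded object's data first
    indices  = {};   // reset the index list too if it is reused the same way

    // ... parse the OBJ file at `path` and push_back the new object's data ...
    // The vertex/index buffers recreated afterwards then contain only the new mesh.
}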

Find out which triangles were drawn OpenGL

I have an idea and I want to know if this would be possible in any way. I want to render a scene and use the resulting image to find out which triangles were or are visible from my current point of view.
Let me give you an example: I would render the scene into a custom framebuffer and store an ID at every pixel, the ID being an identifier of the original primitive. Now my problem is that I don't know how to find out which pixel belonged to which triangle. My first idea was to just pass an ID along the shader stages, but I don't know if that would be possible. If I can find out which primitives were drawn, I could cull the others. Is there any way to find out which pixel belonged to which (original) triangle?
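Something along these lines is what I have in mind, if it is even possible. This is an untested sketch, assuming an OpenGL 3.2+ core context and a framebuffer whose colour attachment 0 is a GL_R32UI texture; all names are placeholders. gl_PrimitiveID seems to be a built-in fragment shader input holding the index of the primitive within the current draw call, which might make an explicit pass-through unnecessary:

#include <cstddef>
#include <vector>
// (OpenGL headers and a loaded 3.2+ core context are assumed here.)

// Fragment shader writing the triangle index into the integer attachment.
const char *idFragmentShader = R"(
    #version 150
    out uint triangleId;
    void main() { triangleId = uint(gl_PrimitiveID); }
)";

// After drawing the scene into the ID framebuffer, read the IDs back.
std::vector<GLuint> readVisibleIds(GLuint idFbo, int width, int height)
{
    std::vector<GLuint> ids(static_cast<std::size_t>(width) * height);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, idFbo);
    glReadPixels(0, 0, width, height, GL_RED_INTEGER, GL_UNSIGNED_INT, ids.data());
    // Every value that shows up in `ids` belongs to a visible triangle;
    // triangles whose index never appears could be culled on the server.
    return ids;
}

As far as I can tell, gl_PrimitiveID restarts at zero for every draw call, so with several meshes I would probably also have to encode which draw call each triangle came from.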
There is a similar question here on Stack Overflow, but it does not really answer my question (see question).
Why do I want to do this?
I have a server-client scenario where my server is very powerful whereas my client is not. The server sends the model data to the client and the client renders it locally. To reduce the rendering time and the amount of memory needed, I want to do precalculations on the server and only send certain parts of the model to the client.
Edit: Changed my question because I misunderstood some concepts.

Highlight specific parts of a mesh c++ OpenGL

I have imported a mesh object (an .obj file from Blender) into an OpenGL (GLFW) window context. I am following various tutorials on 3D picking to allow me to select it. What I cannot get my head around is how to highlight a sub-portion of the mesh when a single point on it is clicked. For example, a car mesh in which, if you click over the door, the entire door gets highlighted. Without going into game engines (my intention is to apply this concept to 3D diagrams in an app), what is the most straightforward way to implement this?
PS -- Before someone downvotes this, I have spent hours on google trying to search for an answer so apologies if this is off-topic / unsuitable.
The mesh has some colour information in the form of vertex colours or textures. To highlight part of the mesh, you need to change the colour information in the vertex arrays or textures that are used. Generating the required arrays and textures can be an expensive CPU operation, but after the data is generated, blitting it to the screen takes no time. The main complexity is in modifying the data structures of the mesh.
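As a rough sketch of the vertex-colour variant: all names here are made up for illustration, and it assumes the mesh keeps a separate colour VBO and that each selectable part (like the door) maps to a contiguous vertex range, for example taken from the OBJ file's g/o groups:

#include <cstddef>
#include <vector>
// (OpenGL headers / function loader assumed to be included elsewhere.)

struct MeshPart { GLint firstVertex; GLsizei vertexCount; };   // e.g. one per OBJ group

// Overwrite the colours of a single part, leaving the rest of the mesh untouched.
void highlightPart(GLuint colorVbo, const MeshPart &part, float r, float g, float b)
{
    std::vector<float> colors(static_cast<std::size_t>(part.vertexCount) * 3);
    for (std::size_t i = 0; i < colors.size(); i += 3) {
        colors[i] = r; colors[i + 1] = g; colors[i + 2] = b;
    }
    glBindBuffer(GL_ARRAY_BUFFER, colorVbo);
    glBufferSubData(GL_ARRAY_BUFFER,
                    static_cast<GLintptr>(part.firstVertex) * 3 * sizeof(float),
                    static_cast<GLsizeiptr>(colors.size() * sizeof(float)),
                    colors.data());
}

Which vertex range belongs to which part has to come from the asset itself, so the grouping into "door", "hood" and so on needs to be done in the modelling tool.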

Advice on setting up a Qt3D scene with redundant objects

I'm new to the Qt3D module and am currently writing a game in Qt5/C++ using Qt3D. This question is about "Am I on the correct path?" or "Can you give me some advice on...".
The scene of the game has a static part (the "world") and some objects (buildings and movable units). Some of the buildings might be animated in the future, but most of them are very static (but of course destructible).
I divide the question into two parts: how to handle copies of the same model placed at different positions in the scene, and how to manage the scene as a whole in the viewer class.
Redundant objects in the scene:
Of course the objects share the same library of buildings / movable units, so it would be dumb to upload the models for these objects to the graphics card for every instance of such a unit. I read through the documentation of QGLSceneNode, from which I guess that it is designed to share the same QGeometryData among multiple scene nodes, but apply different transformations in order to place the objects at different positions in my scene. Sharing the same QGLSceneNode for all instances of a building would be the wrong way, I guess.
I currently have a unit "library class" telling me the properties of each type of building / movable unit, among other things the geometry, including textures. Now, I'd provide a QGeometryData for each building in this library class, which is uploaded during the game's loading procedure (if I decide to do this for all buildings at startup...).
When creating a new instance of a unit, I'd now create a new QGLSceneNode, request the QGeometryData (which is explicitly shared) from the library and set it on the node. Then I set the transformation for this new node and put it in my scene. This leads us to the second part of my question:
Manage the scene as a whole:
My "scene" currently is neither a QGLSceneNode nor a QGLAbstractScene, but a struct of some QGLSceneNodes, one for each object (or collection of objects) in the scene. I see three approaches:
My current approach, but I guess it's "the wrong way".
The composition: putting everything as child nodes in one root QGLSceneNode. This seemed the correct way to me, until I realized that it is very difficult to access specific nodes in such a composition. But when would I even need to access such "specific" nodes? Most operations require taking all nodes into account (rendering them, updating positions for animations), or even operate on a signal-slot basis, so I don't need to find the nodes manually at all. For example, animations can be done using QPropertyAnimations. Acting on events can also be done by connecting a QObject in the game engine core (all buildings are QObjects in the engine's core part) with the corresponding QGLSceneNode.
But this approach has another downside: during rendering, I might need to change some properties of the QGLPainter. I'm not sure which properties I need to change, because I don't know Qt3D well enough to guess what can be done without changing them (for example: using a specific shader to render a specific scene node).
Then I found QGLAbstractScene, but I can't see its advantages compared with the two solutions above, since I can't define the rendering process in the scene. But maybe the scene is not the correct place to define it?
Which is the best approach to manage such a scene in Qt3D?
With "best" I mean: What am I going to do wrong? What can I do better? What other things should I take into account? Have I overlooked anything important in the Qt3D library?

List of vertices from OpenGL program to something importable

I'm working on making a new visualization of the type of binary stars I study, and I'm starting from an existing code that renders a nice view of them given some sensible physical parameters.
I would like a bit more freedom on the animation side of things, however, and my first thought was to output the models made by the program in a format that could be read in by something else (Blender?). I've read up on the (Wavefront?) .OBJ format, and while it seems straightforward, I can't seem to get it right; importing fails silently, and I suspect it's because I'm not understanding how the objects are actually stored.
The program I'm starting from is a C++ project called BinSim, and it already has a flag to output vertices to a log file for all the objects created. It seems pretty simple: just a list of indices, x, y, z, and R, G, B (sometimes A) values. An example of the output format I've been working with can be found here; each object is divided up into a latitude/longitude grid of points, and this is a small snippet (the full file is upwards of 180 MB for all the objects created).
I've been able to see that the objects are defined as triangle strips, but I'm confused enough by all of this that I can't see a clear path towards turning this list of vertices into an .OBJ (or whatever) format. Sorry if this really belongs in another area (GameDev?), and thanks!
OpenGL is not a scene management system; it's a drawing API, and starting from OpenGL data structures for model storage is tedious. As already said, OpenGL draws things. There are several drawing primitives, the triangle strip being one of them: you start with two vertices (forming a line), and each incoming vertex then forms a triangle with the two vertices specified before it. The Wavefront OBJ format doesn't know triangle strips, so you'd have to break them down into individual triangles, emulating the way OpenGL does it.
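A minimal sketch of that strip-to-triangles step (the function name and I/O are just illustrative): walk the strip, emit one OBJ f line per triangle, flip the winding of every second triangle the way OpenGL does, and skip the degenerate triangles that strips often use to restart. Remember that OBJ face indices are 1-based:

#include <cstddef>
#include <cstdio>
#include <utility>
#include <vector>

// `strip` holds 0-based indices into a vertex list that has already been written
// out as "v x y z" lines earlier in the same .obj file.
void writeStripAsObjFaces(std::FILE *out, const std::vector<int> &strip)
{
    for (std::size_t i = 2; i < strip.size(); ++i) {
        int a = strip[i - 2], b = strip[i - 1], c = strip[i];
        if (a == b || b == c || a == c)
            continue;                       // skip degenerate "restart" triangles
        if (i % 2 != 0)
            std::swap(a, b);                // every second triangle has flipped winding
        std::fprintf(out, "f %d %d %d\n", a + 1, b + 1, c + 1);
    }
}

If you also want to carry the R, G, B values over, be aware that plain OBJ has no standard per-vertex colour; some importers accept an extended "v x y z r g b" line, but that is an extension rather than part of the format.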
Also don't forget that Blender is easily extensible using Python scripting, so you can just write an import script for whatever data you already have, without going through the hassle of using some ill-suited format.