DXR: How to identify the geometry instance of the bottom level AS inside the closest hit shader - hlsl

I have multiple geometries (D3D12_RAYTRACING_GEOMETRY_DESC) inside of a single DXR bottom level acceleration structure (BLAS). How can I determine which of those was hit inside of a closest hit shader?
The following HLSL intrinsics do something different:
PrimitiveIndex() returns the triangle index within the current geometry, but it restarts at zero for each new geometry inside the BLAS, so I can't tell which geometry was hit.
InstanceIndex() returns the index of the top-level instance, not of the geometry inside the bottom level.
InstanceID() is likewise only defined for the top-level instance.

Starting with D3D12_RAYTRACING_TIER_1_1 there is a new intrinsic, uint GeometryIndex(), for exactly this purpose (see the DXR specification).

I am wondering about that as well. Unfortunately, I can't give you a definitive answer, but on this page I found the following statement:
Since hit groups need information about the geometry that was hit – its vertex data and material properties – you typically need a local root table for them. To avoid passing data through the local root table, you may also use the instance id field in the top-level instance descriptor and utilize the instance index, which are implicitly available in the hit group shaders. Remember, though, that these values are shared between all geometries in the instance when the bottom level structure contains more than one geometry. To have unique data for each geometry, a local root table must be used.
So if I understood that correctly, you either have to use a local root table or you have to restrict yourself to only one geometry per bottom level structure.
There is a MultiplierForGeometryContributionToShaderIndex parameter in TraceRay which you can set to 1 to get a different hit group per geometry. If you store a list of materials per hit group, one hit group per geometry is probably all you need.
See also RaytracingMiniEngineSample.
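For reference, the DXR specification defines how the hit-group record is selected: the geometry contribution is the zero-based index of the geometry inside its BLAS (fixed by the system), while the other terms come from TraceRay and the instance descriptor. A minimal sketch of the formula in C++ (the function name is illustrative):

#include <cstdint>

// Hit-group shader-table indexing as defined in the DXR specification.
// Setting the multiplier to 1 gives every geometry in a BLAS its own
// consecutive hit-group record, which a local root table can then fill
// with per-geometry data.
uint32_t HitGroupRecordIndex(uint32_t rayContributionToHitGroupIndex,    // TraceRay argument
                             uint32_t multiplierForGeometryContribution, // TraceRay argument
                             uint32_t geometryContribution,              // geometry index in BLAS
                             uint32_t instanceContribution)              // from the instance desc
{
    return rayContributionToHitGroupIndex
         + multiplierForGeometryContribution * geometryContribution
         + instanceContribution;
}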

Parsing OBJ file having multiple objects

On reading through the OBJ file format, I realized that there are several other attributes like Sub-objects (-o), Groups (-g) etc.
I am having trouble understanding how to parse these other parameters (if necessary). Should I load the vertices and faces data of every (-o) into separate VAOs?
Also, in that case, how will the rotation of the whole object work? Will all Sub-objects start rotating about their own centers?
How to make the multiple sub-objects rotate in unison like a single object and still have independent control over each whenever necessary?
Should I load the vertices and faces data of every (-o) into separate VAOs?
Basically yes. Each object (-o) or group (-g) is essentially a separate part of the overall model, so you should be creating a separate renderable mesh for each of them.
Generally the vertex data (vertices, normals, texture coordinates) is not shared between models, but there's nothing in the OBJ format to say they couldn't be AFAIK.
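A minimal sketch of that splitting step in C++ (SubMesh and SplitObjByGroup are illustrative names; vertex/normal/texcoord lines are left to global arrays, since OBJ face indices refer to the whole file rather than to the current group):

#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct SubMesh {
    std::string name;
    std::vector<std::string> faceLines;  // raw "f ..." lines for this group
};

// Split an OBJ file into sub-meshes at each "o"/"g" line.
std::vector<SubMesh> SplitObjByGroup(const std::string& path)
{
    std::vector<SubMesh> meshes;
    std::ifstream file(path);
    std::string line;
    while (std::getline(file, line)) {
        std::istringstream ls(line);
        std::string keyword;
        ls >> keyword;
        if (keyword == "o" || keyword == "g") {
            std::string name;
            ls >> name;
            meshes.push_back({name, {}});              // start a new sub-mesh
        } else if (keyword == "f") {
            if (meshes.empty())
                meshes.push_back({"default", {}});     // faces before any o/g
            meshes.back().faceLines.push_back(line);   // face belongs to the
        }                                              // current group
        // "v", "vt", "vn" lines would be parsed into global arrays (omitted)
    }
    return meshes;
}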
How to make the multiple sub-objects rotate in unison like a single object and still have independent control over each whenever necessary?
This is more of a design question. Assuming you have some sort of tree-like scene-graph then you would create a root node for the entire OBJ model and attach separate nodes for each group. To rotate the whole model you apply a transform to the root node. As far as I know the OBJ format does not specify transformations, rotations, etc.
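A minimal sketch of that node idea, assuming hypothetical Mat4 and Mesh types (any matrix library will do):

#include <vector>

struct Mat4 { /* 4x4 matrix; assumed to exist */ };
Mat4 operator*(const Mat4&, const Mat4&);             // composes transforms

struct Mesh { void Draw(const Mat4& world) const; };  // one renderable group

// One node per OBJ object/group, plus a root node for the whole model.
struct Node {
    Mat4 localTransform;           // rotate this to spin one sub-object
    Mesh* mesh = nullptr;          // optional renderable part
    std::vector<Node*> children;

    void Draw(const Mat4& parent) const {
        Mat4 world = parent * localTransform;  // compose down the tree
        if (mesh) mesh->Draw(world);
        for (const Node* c : children) c->Draw(world);
    }
};

Rotating the root's localTransform rotates every sub-object in unison about the model's origin; rotating a child's transform affects only that group.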

Create points (gl_Points) and show sequentially in modern OpenGL - PyOpenGL

I have been using OpenGL for a while now and continue to stay positive about making progress. However, I now have an issue that I have been unable to solve and it's taking a while. So, the issue is that I would like to:
Create points on screen sequentially (to appear every second for example)
Move these points independently
So far I have two methods on paper: one is to upload all vertices to a VBO up front and make each point visible (draw it) as its turn comes. The other method I had in mind is to create an empty VBO (data pointer set to NULL) and upload the data one point at a time.
Note, I want to transform these points independently of each other - can a uniform still be used? If so, how can I set this up to draw point - transform - draw point - transform?
If I'm going about this completely wrong or there is a better, more improved method then please say so.
Many thanks!
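For illustration, a minimal sketch of the second method described in the question (empty buffer, per-point uploads), written with the raw GL calls, which map one-to-one to their PyOpenGL equivalents; the names and the capacity constant are illustrative, and a GL loader plus vertex-attribute setup are assumed elsewhere:

const GLsizei MAX_POINTS = 1000;   // illustrative capacity
GLuint pointVbo = 0;
GLsizei visiblePoints = 0;

void InitPointBuffer()
{
    glGenBuffers(1, &pointVbo);
    glBindBuffer(GL_ARRAY_BUFFER, pointVbo);
    // Reserve room for all points up front, but upload no data yet.
    glBufferData(GL_ARRAY_BUFFER, MAX_POINTS * 2 * sizeof(GLfloat),
                 nullptr, GL_DYNAMIC_DRAW);
}

void AddPoint(GLfloat x, GLfloat y)   // e.g. called once per second
{
    const GLfloat xy[2] = { x, y };
    glBindBuffer(GL_ARRAY_BUFFER, pointVbo);
    glBufferSubData(GL_ARRAY_BUFFER,
                    visiblePoints * 2 * sizeof(GLfloat), sizeof(xy), xy);
    ++visiblePoints;
}

void DrawPoints()
{
    // Only the prefix uploaded so far is drawn, so points appear
    // one by one as AddPoint is called.
    glDrawArrays(GL_POINTS, 0, visiblePoints);
}

As for the uniform question: a per-point uniform works if each point gets its own one-point draw call, but updating a per-vertex attribute (or instanced data) and drawing everything in a single call scales much better.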

VTK abstract picker for multiple actors of different opacity value

I am new to VTK. I would like to know how the VTK abstract picker behaves for multiple actors with different opacity values. Consider two actors, one inside the other, where the outer surface has opacity 0.3 and the inner one 1.0. Since the outer one is semi-transparent, I can see the inner actor in the region where the two actors overlap. When I pick in that region, the resulting coordinates come from the inner surface, and when I pick outside the overlap region, I get outer-surface coordinates. How can I make picking take the opacity values into account, so that I can pick one actor at a time?
vtkAbstractPicker is, as the name suggests, just an abstract class that defines the interface for picking, but nothing more. When choosing an actual picker, you basically have a choice between picking based on ray casting or "color picking" using the graphics hardware (see the linked documentation for the actual VTK classes that implement each).
Now to the actual problem, if I understood what you wrote correctly, you are facing a rather simple sorting problem. The opacity can be seen as kind of a priority - the actors with higher opacity should be picked even if they are inside others with lower opacity, right? Then all you need to do is get all the actors that are underneath your mouse cursor and then choose the one with highest opacity, or the closest one for cases when they have the same opacity.
I think the easiest way to implement this is using the vtkPropPicker (vtkProp is a parent class for actor so this is a good picker for picking actors). It is one of the "hardware" pickers, using the color picking algorithm. The basic algorithm is that each pickable object is rendered using a different color into a hidden buffer (texture). This color (which after all is a 32-bit number like any other) serves as an ID of that object: when the user clicks on a screen, you read the color of the pixel from the picking texture on the clicked coordinates and then you simply look into the map of objects under the ID that is equal to that color and you have the object. Obviously, it cannot use any transparency - the individual colors are IDs of the objects, blending them would make them impossible to identify.
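The ID-to-color mapping described above is just integer packing; a small sketch of the idea (not VTK's actual internal code):

#include <cstdint>

// Pack a 24-bit object ID into an RGB triple for the hidden picking pass,
// and recover it from the pixel read back under the mouse click.
void IdToRgb(uint32_t id, uint8_t rgb[3])
{
    rgb[0] = (id >> 16) & 0xFF;
    rgb[1] = (id >>  8) & 0xFF;
    rgb[2] =  id        & 0xFF;
}

uint32_t RgbToId(const uint8_t rgb[3])
{
    return (uint32_t(rgb[0]) << 16) | (uint32_t(rgb[1]) << 8) | rgb[2];
}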
However, the vtkPropPicker provides a method:
// Perform a pick from the user-provided list of vtkProps
// and not from the list of vtkProps that the render maintains.
// If something is picked, a 1 is returned, otherwise 0 is returned.
// Use the GetViewProp() method to get the instance of vtkProp that was picked.
int PickProp(double selectionX, double selectionY,
             vtkRenderer *renderer, vtkPropCollection *pickfrom);
What you can do with this is simple: first call PickProp(mouseClickX, mouseClickY, renderer, pickfrom), providing only the highest-priority actors in the pickfrom collection, i.e. the actors with the highest opacity. Underneath, this does a render of just the provided actors using the color-coding algorithm and tells you which actor is underneath the specified coordinates. If it picks something (return value is 1), you call GetViewProp to get the pointer to the picked actor and you are done. If it does not (return value is 0), you call it again, this time providing the actors with the next lower opacity, and so on until you either pick something or run out of actors.
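A sketch of that loop, assuming the actors have already been bucketed into vtkPropCollections sorted by descending opacity (PickByOpacity is an illustrative name):

#include <vector>
#include <vtkNew.h>
#include <vtkProp.h>
#include <vtkPropCollection.h>
#include <vtkPropPicker.h>
#include <vtkRenderer.h>
#include <vtkSmartPointer.h>

vtkProp* PickByOpacity(double x, double y, vtkRenderer* renderer,
                       const std::vector<vtkSmartPointer<vtkPropCollection>>& tiers)
{
    vtkNew<vtkPropPicker> picker;
    for (const auto& tier : tiers) {                 // highest opacity first
        if (picker->PickProp(x, y, renderer, tier))  // renders only this tier
            return picker->GetViewProp();            // the picked actor
    }
    return nullptr;  // nothing pickable under the cursor
}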
You can do the same with ray-casting pickers like vtkPicker as well - it casts a ray underneath your mouse and gives you all intersections with everything in the scene. But vtkPicker's API is optimized for finding the closest intersection, so it might be a bit of work to get all of them and sort them, and in the end I believe the solution using vtkPropPicker will be faster anyway.
If this solution works for you, you might want to look at vtkHardwareSelector, which uses the same algorithm but, unlike vtkPropPicker, allows you to access the underlying picking texture many times, so that you don't need to re-render for every picking query. Depending on how your rendering pipeline is set up, this might be a more efficient solution (i.e. if you do a lot of picking without updating the scene).

An efficient way to draw many OpenGL points in individual Begin-End blocks?

If you begin to render points, render a ton of vertices, and then end, you get noticeably better performance than if you begin points, render a vertex, end, and repeat a ton of times (e.g., redraws during pan and zoom actions for, say, 200,000 points are MUCH smoother).
I guess this might make sense, but it's disappointing. Is there a way to get back the performance while still rendering each point in its own begin-end block?
BACKGROUND:
I wanted to design a control that could contain a ton (upwards of a million in an extreme case) of "objects" that each do its own rendering. Many of these objects will represent themselves as points.
If I let a hundred-thousand points individually render themselves in their own begin-end blocks, I get a major performance hit (as opposed to rendering them all in a single begin-end block). It thus seems I might have to make the container aware of the way the objects render themselves (for example, beginning points, telling everything that needs to render a point to do so, and then ending).
This messes up the independent nature of the display-object relationship I wanted. It also messes up hit testing via selection mode, because I don't think you can assign a name to an individual vertex inside a begin-end block of points, right?
FYI (in case this helps) my project will be displaying a 2D scene (using an ortho projection) and requires hit testing to determine which related object a user might click. In general, the objects will represent "tracks" containing individual points connected with lines. The position data is generally static, but point and track colors and display representations may change due to user settings and selection information. One exception--a "playback" mode may allow the user to see only one track point at a time (the "current" point in the playback) and step through time from one point to the next. However, even in that case I assumed I would simply change which point on each track is actually displayed (at its "static" location) depending on the current time in the playback. If any of that brings to mind further suggestions for an OpenGL newbie, then much thanks!
To solve this issue, I started by using VBOs (which did speed things up). I then allowed my "track" objects to each draw their own set of points as well as the lines connecting the points (each track using two DrawArrays calls: one for the line strip and one for the points). Each point no longer has to draw itself independently of the other points--this was the major performance improvement.
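A sketch of one track's draw with those two DrawArrays calls (Track, m_vbo, and m_pointCount are illustrative names; classic fixed-function vertex arrays assumed):

void Track::Draw() const
{
    glBindBuffer(GL_ARRAY_BUFFER, m_vbo);         // 2D positions for this track
    glVertexPointer(2, GL_FLOAT, 0, nullptr);
    glEnableClientState(GL_VERTEX_ARRAY);

    glDrawArrays(GL_LINE_STRIP, 0, m_pointCount); // the connecting lines
    glDrawArrays(GL_POINTS, 0, m_pointCount);     // the points on top

    glDisableClientState(GL_VERTEX_ARRAY);
}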
BUT, I still needed hit-testing against the points, so..
Finally, I allowed each displayed object (in this case, the tracks) to do its own selection routine so each object can do what it needs for efficient selection. For tracks, this is a two-step process. First, a track names its entire line strip with a single name (0) and performs the selection pass. If that results in a hit, the track does a second render pass, naming each individual point and line segment, to hit-test against each part of the track. This makes hit-testing against each point quite fast.
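A sketch of those two naming passes with legacy GL_SELECT names; a name can't be loaded inside a begin-end block, but between glDrawArrays calls it can (Track and m_pointCount are illustrative, and the same vertex-array setup as in the draw sketch above is assumed):

void Track::RenderForSelection(bool perPointPass) const
{
    glInitNames();
    glPushName(0);
    if (!perPointPass) {
        // Pass 1: the whole strip under a single name.
        glLoadName(0);
        glDrawArrays(GL_LINE_STRIP, 0, m_pointCount);
    } else {
        // Pass 2 (only after a pass-1 hit): a unique name per point.
        for (GLsizei i = 0; i < m_pointCount; ++i) {
            glLoadName(static_cast<GLuint>(i + 1));
            glDrawArrays(GL_POINTS, i, 1);
        }
    }
    glPopName();
}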
As an aside, I'm using .Net (C#) for my programming. With it, I created a class (SelectEventArgs) derived from EventArgs to describe selection criteria to objects being displayed. My SelectEventArgs class includes a list meant to be filled with selected objects. The display class then has an EventHandler<SelectEventArgs> event. When an object is added to a display, it subscribes to that event. When the event fires, each object determines whether it's been selected and fills the list of selected objects (in the SelectEventArgs passed in the event) with its information. After the event fires, I can access the list of objects returned in the SelectEventArgs instance to handle the user interaction. I find this to be a nice pattern for creating flexible display objects and selection tools.

OpenGL: When and how often am I supposed to call glLineWidth, glPointSize, glLineStyle?

As far as I know it is common practice to call glColor4f or the like each time before drawing an object or primitive.
But what about point and line style properties?
Is it normal to call glLineWidth and glPointSize very often?
Should I store a backup of the current point size and set it back after drawing, or simply call glPointSize before drawing any point, even ones that use the default size?
Unless you are drawing tens to hundreds of thousands of lines, it really won't matter. And even then, you should profile and verify that this actually matters to performance. But let's assume you did that.
Minimizing the number of state changes could improve your performance. This means that you should sort your lines by line size and your points by point size. That way, lines that are all the same size can be drawn at the same time. This of course assumes that you could draw the lines in any order. If you need the lines to be drawn in a certain order, then you will have to live with the state changes.
Avoid glGet* functions for determining the current line width / point size; they are a big performance eater.
Instead, store the current property locally and update it when necessary (preferred), or use glPushAttrib(GL_LINE_BIT) / glPopAttrib.
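A sketch of the "store it locally" approach (names are illustrative):

// Cache the current line width client-side so glLineWidth is only
// called when the value actually changes.
static GLfloat g_lineWidth = 1.0f;  // GL's default line width

void SetLineWidthCached(GLfloat width)
{
    if (width != g_lineWidth) {   // skip redundant state changes
        glLineWidth(width);
        g_lineWidth = width;
    }
}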
OpenGL is a state machine, so the only important rule is that you set state when you need it and whenever it changes. You need a certain line width? Set it and be done with it. You need a number of different line widths in a single code segment? Sort by line width and set it once for every width.
Some states are expensive to switch, so it's a good idea to keep track of those; in particular the states in question are anything related to texture binding (either to a texture unit or as FBO attachment) and shaders. Everything else is actually quite cheap to change.
In general it's a good idea to set OpenGL state explicitly and not assume certain states are preset from earlier. This also covers the transformation matrices and viewport setup: do a full viewport and projection setup at the beginning of every display function; advanced applications will have to change those multiple times while drawing a single frame anyway (so no glViewport, glMatrixMode(GL_PROJECTION), ... in a reshape handler).
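A sketch of that per-frame setup with the fixed-function matrix stack (a 2D ortho projection as in the threads above; the window-size parameters are illustrative):

void Display(int windowWidth, int windowHeight)
{
    // Full viewport/projection setup at the start of every frame,
    // rather than relying on state left over from a reshape handler.
    glViewport(0, 0, windowWidth, windowHeight);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, windowWidth, 0.0, windowHeight, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // ... draw the scene ...
}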