Generating depth map with panda3d - opengl

I need to generate test data for 3D reconstruction code, and I decided to use Panda3D for this. I am able to create a simple app and see the scene. Now I need to create a depth map for the scene, i.e. for each pixel on the screen I need to calculate the depth: the distance from the camera to the closest object in 3D space, measured perpendicular to the camera plane. Which API functions are most suitable for that?

This is in principle similar to shadow mapping, as demonstrated in the advanced shadow sample. You will need to create an offscreen buffer and a camera to render the depth buffer. Note that unless you use an orthographic lens, the resulting depth values will not be linear and will need to be transformed to a linear value using the near and far values of the lens. The near and far distances should be configured so that you get the desired range of depth values.
Alternatively, you can use a shader to write the appropriate distance values into the colour buffer, which is particularly useful if you want to store distance values of a perspective camera without having to undo the perspective projection later, or if you want to store the original world-space positions.
If you want to be able to access the values on the CPU, you will need to pass RTM_copy_ram instead of RTM_bind_or_copy when binding your texture, which tells Panda3D to transfer the rendered buffer contents to CPU-accessible memory.
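As a rough illustration, a minimal Python sketch of that setup might look like the following (the buffer size, sort value, and the linearization helper are assumptions for illustration, not the only way to do it):

    from direct.showbase.ShowBase import ShowBase
    from panda3d.core import (FrameBufferProperties, WindowProperties,
                              GraphicsPipe, GraphicsOutput, Texture)

    class DepthDemo(ShowBase):
        def __init__(self):
            ShowBase.__init__(self)
            # Offscreen buffer that only needs a depth attachment.
            fb_props = FrameBufferProperties()
            fb_props.setDepthBits(24)
            win_props = WindowProperties.size(640, 480)
            self.depth_buffer = self.graphicsEngine.makeOutput(
                self.pipe, "depth buffer", -2, fb_props, win_props,
                GraphicsPipe.BFRefuseWindow, self.win.getGsg(), self.win)
            # RTM_copy_ram copies the depth plane to CPU-accessible memory
            # every frame; it can then be read via depth_tex.getRamImage().
            self.depth_tex = Texture()
            self.depth_tex.setFormat(Texture.FDepthComponent)
            self.depth_buffer.addRenderTexture(
                self.depth_tex, GraphicsOutput.RTMCopyRam, GraphicsOutput.RTPDepth)
            # Second camera sharing the main camera's lens and transform.
            lens = self.cam.node().getLens()
            self.depth_cam = self.makeCamera(self.depth_buffer, lens=lens)
            self.depth_cam.reparentTo(self.cam)

    def linearize(d, near, far):
        # Convert a nonlinear depth value d in [0, 1] from a perspective lens
        # back to an eye-space distance, assuming the default depth range.
        return (2.0 * near * far) / (far + near - (2.0 * d - 1.0) * (far - near))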

Related

How do I get started with a GPU voxelizer?

I've been reading various articles about how to write a GPU voxelizer. From my understanding, the process goes like this:
1. Inspect each triangle individually and decide which axis shows the triangle with the largest projected area. Call this the dominant axis.
2. Render the triangle along its dominant axis and sample the texels that come out.
3. Write that texel data into a 3D texture, then do what you will with the data.
Disregarding conservative rasterization, I have a lot of questions regarding this process.
I've gotten as far as rendering each triangle, choosing a dominant axis, and orthographically projecting it. What should the extents of the orthographic projection be? Should they be based on the size of the voxels, or on how large an area the map should cover?
What am I supposed to do in the fragment shader? How do I write to my 3D texture so that it stores the voxel data? From my understanding, because we chose the dominant axis, each fragment can cover no more than a depth of one voxel. However, since we projected orthographically, I don't see how that maps onto the 3D texture.
Finally, I am wondering where to store the texture data. I know it's a bad idea to store data on the CPU side, since you have to pass it all in to use it on the GPU; however, the source code I am loosely following chooses to store all its textures on the CPU side, such as those for a light map. My assumption is that data used only on the GPU should be stored there, and data used on both should be stored on the CPU side. Following that, I store my data on the CPU side. Is that correct?
My main sources have been: https://www.seas.upenn.edu/~pcozzi/OpenGLInsights/OpenGLInsights-SparseVoxelization.pdf (OpenGL Insights)
https://github.com/otaku690/sparsevoxeloctree, an SVO using a voxelizer. The issue is that the shader code is not in the GitHub repository.
In my own implementation, the whole scene is positioned and scaled into a unit cube centered on the world origin. The model-view-projection matrices are then straightforward, and the viewport is simply the desired voxel resolution.
I use a two-pass approach to output the voxel fragments: the first pass counts the output voxel fragments by accumulating a single variable with an atomic counter. I then use that count to allocate a linear buffer.
In the second pass, the rasterized voxel fragments are stored in the allocated linear buffer, again using the atomic counter to avoid write conflicts.
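A rough host-side sketch of that two-pass allocation in Python/PyOpenGL (the binding points, the 16 bytes per voxel fragment, and the omitted counting/writing shaders are assumptions for illustration):

    import ctypes
    from OpenGL.GL import *

    def reset_counter(counter_buf):
        # Zero the atomic counter and bind it to binding point 0.
        glBindBuffer(GL_ATOMIC_COUNTER_BUFFER, counter_buf)
        glBufferData(GL_ATOMIC_COUNTER_BUFFER, 4, bytes(4), GL_DYNAMIC_DRAW)
        glBindBufferBase(GL_ATOMIC_COUNTER_BUFFER, 0, counter_buf)

    counter_buf = glGenBuffers(1)

    # Pass 1: rasterize with a shader that only calls atomicCounterIncrement(),
    # so the counter ends up holding the number of voxel fragments.
    reset_counter(counter_buf)
    # ... draw the scene with the counting shader bound ...
    glMemoryBarrier(GL_ATOMIC_COUNTER_BARRIER_BIT)
    ptr = glMapBufferRange(GL_ATOMIC_COUNTER_BUFFER, 0, 4, GL_MAP_READ_BIT)
    num_fragments = ctypes.cast(ptr, ctypes.POINTER(ctypes.c_uint32))[0]
    glUnmapBuffer(GL_ATOMIC_COUNTER_BUFFER)

    # Allocate the linear voxel-fragment buffer (here 16 bytes per fragment,
    # e.g. packed position plus packed color).
    frag_buf = glGenBuffers(1)
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, frag_buf)
    glBufferData(GL_SHADER_STORAGE_BUFFER, num_fragments * 16, None, GL_DYNAMIC_DRAW)
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, frag_buf)

    # Pass 2: rasterize again; the fragment shader now uses the value returned
    # by atomicCounterIncrement() as its write index into the linear buffer.
    reset_counter(counter_buf)
    # ... draw the scene with the writing shader bound ...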

Is my situation a good case to use GL_STATIC_DRAW?

I have a textured polygon mesh that I plan to make movable based on the user's various inputs.
For example, the user can move the vertices in various directions, but the number of vertices and the texture coordinates will always stay constant.
Is this a good situation to use GL_STATIC_DRAW, or should I use something else, like GL_STREAM_DRAW?
Instead of updating a VBO every time the vertices are moved, I would suggest using transformations. With transformations, you can create a matrix that can translate, rotate, or scale the vertices by simply multiplying the transformation matrix by the position vector. This multiplication can be done on the graphics card with a GLSL shader. Using this method, your vertex buffer would never have to change.
I would suggest reading this article for more information on how to use transformations in OpenGL: https://open.gl/transformations
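As a small, hedged illustration of that approach in Python/PyOpenGL (the uniform name "u_model" and the vertex shader that multiplies it with the position are assumptions):

    import numpy as np
    from OpenGL.GL import (glUseProgram, glGetUniformLocation,
                           glUniformMatrix4fv, GL_FALSE)

    def translation(tx, ty, tz):
        # 4x4 translation matrix laid out so that the flat (row-major numpy)
        # memory matches OpenGL's column-major convention.
        m = np.identity(4, dtype=np.float32)
        m[3, 0:3] = (tx, ty, tz)
        return m

    def draw(program, offset):
        glUseProgram(program)
        loc = glGetUniformLocation(program, "u_model")
        glUniformMatrix4fv(loc, 1, GL_FALSE, translation(*offset))
        # ... issue the draw call; the VBO stays untouched, and the vertex
        # shader computes u_model * vec4(position, 1.0) ...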
No, your situation is not a good case to use GL_STATIC_DRAW. As h4lcOn's link suggests, you should use dynamic or stream. Though if I understand correctly what you are trying to do, I wouldn't even use a VBO at all. There will not be much overhead (if any at all) if you push the coordinates on every draw call for a simple polygon. Use a VBO when you have a large number of polygons, or when you make a large number of draw calls with the same vertex data in a single frame.
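If you do keep a VBO for this, a minimal sketch of the dynamic-update pattern in Python/PyOpenGL could look like this (names and data layout are placeholders):

    import numpy as np
    from OpenGL.GL import (glGenBuffers, glBindBuffer, glBufferData,
                           glBufferSubData, GL_ARRAY_BUFFER, GL_DYNAMIC_DRAW)

    def create_dynamic_vbo(vertices):
        # Allocate once with GL_DYNAMIC_DRAW, since the contents change often.
        vbo = glGenBuffers(1)
        glBindBuffer(GL_ARRAY_BUFFER, vbo)
        glBufferData(GL_ARRAY_BUFFER, vertices.nbytes, vertices, GL_DYNAMIC_DRAW)
        return vbo

    def update_vertices(vbo, vertices):
        # Re-upload only the data; the buffer object itself is reused.
        glBindBuffer(GL_ARRAY_BUFFER, vbo)
        glBufferSubData(GL_ARRAY_BUFFER, 0, vertices.nbytes, vertices)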

Here is a volume-render result; how can it interact with other 3D objects?

I've implemented a volume renderer using ray casting in CUDA. Now I need to add other 3D objects (like 3D terrain, in my case) to the scene and then make them interact with the volume-rendered result. For example, when I move the volume-rendered result so that it overlaps the terrain, I want to modulate it, for instance by clipping the overlapping part of the volume-rendered result.
However, the volume-rendered result comes from accumulating color along each ray, so it is a 2D picture with no depth. How to implement the interaction is what confuses me. Can somebody give me a hint?
First you render your 3D rasterized objects. Then you take the depth buffer and use it as an additional data source in the volume raycaster, as a constraint on the integration limits.
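As a hedged sketch of that idea (plain Python/numpy for clarity rather than CUDA; sample_volume and composite are assumed helpers): unproject each pixel's depth value to an eye-space distance and stop the ray march there.

    import numpy as np

    def linear_eye_depth(d, near, far):
        # Depth-buffer value d in [0, 1] (perspective projection, default
        # depth range) back to a positive eye-space distance along -Z.
        z_ndc = 2.0 * d - 1.0
        return (2.0 * near * far) / (far + near - z_ndc * (far - near))

    def march(origin_eye, dir_eye, d, near, far, step=0.01, max_steps=512):
        # Front-to-back ray marching in eye space, clipped against the
        # rasterized terrain stored in the depth buffer.
        z_limit = linear_eye_depth(d, near, far)
        color = np.zeros(4, dtype=np.float32)
        for i in range(max_steps):
            p = origin_eye + i * step * dir_eye
            if -p[2] > z_limit or color[3] >= 1.0:
                break                                    # behind the terrain or opaque
            color = composite(color, sample_volume(p))   # assumed helpers
        return color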
Actually, I think the result of ray casting is a 2D image, so it cannot interact with other 3D objects in the usual way. My solution is therefore to take the 2D ray-casting image as a texture and blend it into the 3D scene. If I can control the view position and direction, I can map the ray-casting result to the exact place in the 3D scene. I'm still trying to implement this solution, but I think the idea is sound.

Mapping from 2D projection back to 3D point cloud

I have a 3D model consisting of point vertices (XYZ) and eventually triangular faces.
Using OpenGL or a camera view-projection matrix, I can project the 3D model onto a 2D plane, i.e. a view window or an image with m*n resolution.
The question is how I can determine the correspondence between a pixel in the 2D projection plane and its corresponding vertex (or face) in the original 3D model.
Namely,
What is the closest vertex in the 3D model for a given pixel of the 2D projection?
It sounds like picking in OpenGL or a ray-tracing problem. Is there, however, any easy solution?
With the ray-tracing approach, it is really about finding the first vertex/face intersected by the ray from the viewpoint. Can someone point me to a tutorial or examples? I would like an algorithm that does not depend on OpenGL.
Hit testing in OpenGL is usually done without ray tracing. Instead, as each primitive is rendered, a plane in the output (for example, a spare color attachment) is used to store the unique ID of the primitive. Hit testing is then as simple as reading the ID plane at the cursor location.
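A hedged sketch of that scheme in Python/PyOpenGL, encoding the ID into an RGB color and reading back the pixel under the cursor (the packing convention is an assumption):

    from OpenGL.GL import glReadPixels, GL_RGB, GL_UNSIGNED_BYTE

    def id_to_color(obj_id):
        # Pack a 24-bit ID into an RGB color used to flat-shade the primitive.
        return ((obj_id & 0xFF) / 255.0,
                ((obj_id >> 8) & 0xFF) / 255.0,
                ((obj_id >> 16) & 0xFF) / 255.0)

    def pick(x, y):
        # After rendering every primitive with its ID color (no lighting,
        # blending, or anti-aliasing), read the single pixel under the cursor.
        # Note that glReadPixels uses a bottom-left origin for y.
        pixel = glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE)
        r, g, b = pixel[0], pixel[1], pixel[2]
        return r | (g << 8) | (b << 16)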
My (possibly naive) thought would be to create an array of the vertices and then sort them by their distance (or distance squared, for speed) to your screen point once projected; the first item in the list will be the closest. Sorting costs O(n log n) for n vertices; the edit below brings it down to O(n).
Edit: Better for speed and memory: simply loop through all the vertices and keep track of the vertex whose projection is closest (in squared distance) to your viewport pixel. This assumes that you are able to perform the projection yourself, without relying on OpenGL.
For example, in pseudo-code:
function findPointFromViewPortXY(pointOnViewport):
    closestPoint = null
    bestDistanceSquared = infinity
    for each point in points:
        projectedXY = projectOntoViewport(point)
        distanceSquared = squaredDistance(projectedXY, pointOnViewport)
        if distanceSquared < bestDistanceSquared:
            bestDistanceSquared = distanceSquared
            closestPoint = point
    return closestPoint
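For completeness, the projectOntoViewport used above can be implemented without OpenGL; a possible Python/numpy version (assuming a combined model-view-projection matrix mvp and a width x height viewport):

    import numpy as np

    def project_onto_viewport(point, mvp, width, height):
        # Standard pipeline: object space -> clip space -> NDC -> window coordinates.
        clip = mvp @ np.append(np.asarray(point, dtype=float), 1.0)
        ndc = clip[:3] / clip[3]                 # perspective divide
        x = (ndc[0] + 1.0) * 0.5 * width
        y = (1.0 - ndc[1]) * 0.5 * height        # flip Y so the origin is top-left
        return np.array([x, y])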
In addition to Ben Voigt's answer:
If you do a separate pass over pickable objects, then you can set the viewport to contain only a single pixel that you will read.
You can also encode the triangle ID by using a geometry shader (gl_PrimitiveID).

How do you do nonlinear shading in OpenGL?

I am developing a visualization tool in OpenGL to visualize the output of a 3d finite element modeling application. The application uses a tetrahedral mesh (but I am only viewing the exterior facets, which are triangles). The output is a scalar variable, which I want to map to a color map (I already know how to do that). The tricky part is that the value of the variable in each cell is given by a polynomial function (I think it's of degree 3, but that hasn't been finalized yet) of the coordinates in that cell.
In OpenGL, if I use the "smooth" shading model, create a polygon, and give each vertex a different value, it will automatically interpolate (linearly) between the values at the vertices to obtain the color values at points in the interior. But that only gives a linear function in each cell, and I want a nonlinear function that I specify. Is there a way to do this?
(Of course, one solution would be to interpolate "manually" by drawing each cell as a composite of much smaller OpenGL polygons that are small enough that the color doesn't change much in each of them. But I want to know if OpenGL itself has a solution.)
You could either use a pixel shader if you have experience in GLSL (or the time to learn it), or render your scalar values to a texture and texture-map your triangles with it.
If you use a shader, you should be able to read the color values from your triangle's vertices and perform the interpolation yourself as you see fit.
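For example, a hedged fragment-shader sketch (kept here as a Python string; the cell-local coordinate varying, the coefficient uniforms, and the 1D colormap texture are all assumptions about how the data would be fed in):

    FRAGMENT_SHADER = """
    #version 330 core
    in vec3 v_local;               // cell-local coordinate, interpolated per fragment
    uniform float u_coeffs[20];    // degree-3 trivariate polynomial coefficients
    uniform sampler1D u_colormap;  // 1D colormap texture
    uniform float u_min, u_max;    // scalar range used for normalization
    out vec4 frag_color;

    float eval_poly(vec3 p) {
        // Evaluate the degree-3 polynomial; the monomial ordering is an assumption.
        float x = p.x, y = p.y, z = p.z;
        return u_coeffs[0]
             + u_coeffs[1]*x + u_coeffs[2]*y + u_coeffs[3]*z
             + u_coeffs[4]*x*x + u_coeffs[5]*y*y + u_coeffs[6]*z*z
             + u_coeffs[7]*x*y + u_coeffs[8]*y*z + u_coeffs[9]*x*z
             + u_coeffs[10]*x*x*x + u_coeffs[11]*y*y*y + u_coeffs[12]*z*z*z
             + u_coeffs[13]*x*x*y + u_coeffs[14]*x*x*z + u_coeffs[15]*y*y*x
             + u_coeffs[16]*y*y*z + u_coeffs[17]*z*z*x + u_coeffs[18]*z*z*y
             + u_coeffs[19]*x*y*z;
    }

    void main() {
        float t = clamp((eval_poly(v_local) - u_min) / (u_max - u_min), 0.0, 1.0);
        frag_color = texture(u_colormap, t);
    }
    """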
Edit
I found a paper dealing with that exact problem: http://mgarland.org/files/papers/perpixel.pdf