OpenGL: Saving depth map as 2D array

I am able to render depth maps of 3D models to the screen using OpenGL. I am trying to obtain a 2D array (or matrix) representation of the depth map, say as a grayscale image, so I can perform image processing operations on it, like masking and segmentation.
So far, my fragment shader simply outputs depth values instead of colors, so the depth map is only displayed. How can I save the resulting depth map as a matrix?

You have to use a framebuffer object (FBO). Attach a texture to it as the depth attachment, and then use it as a normal texture. Have a look at this tutorial for example.
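A minimal sketch of that setup, assuming a GL 3.3+ core context and a loader such as GLEW or glad are already initialised (the function and variable names are mine):

    #include <GL/glew.h>
    #include <vector>

    // Create an FBO whose only attachment is a depth texture.
    GLuint createDepthFbo(int width, int height, GLuint& depthTex) {
        glGenTextures(1, &depthTex);
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height,
                     0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        GLuint fbo;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_2D, depthTex, 0);
        glDrawBuffer(GL_NONE);  // depth-only pass, no colour attachment
        glReadBuffer(GL_NONE);
        return fbo;
    }

    // After rendering the scene into the FBO: copy the depth values into a
    // CPU-side 2-D array (row-major; OpenGL returns the bottom row first).
    std::vector<float> readDepth(int width, int height) {
        std::vector<float> depth(width * height);
        glReadPixels(0, 0, width, height,
                     GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());
        return depth;
    }

glGetTexImage on the depth texture would work too; glReadPixels while the FBO is bound is simply the most direct route to a row-major float matrix.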

Related

Generating a depth map with Panda3D

I need to generate test data for 3D reconstruction code, and for this I decided to use Panda3D. I am able to create a simple app and see the scene. Now I need to create a depth map for the scene, i.e. for each pixel on the screen I need to calculate the depth: the distance from the camera to the closest object in 3D space (measured perpendicular to the camera plane). Which API functions are most suitable for that?
This is in principle similar to shadow mapping, as demonstrated in the advanced shadow sample. You will need to create an offscreen buffer and camera to render the depth buffer. Note that unless you use an orthographic lens, the resulting depth values will not be linear and will need to be transformed to linear values using the near and far values of the lens. The near and far distances should be configured so as to cover the desired range of depth values.
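For reference, the usual linearization for a standard perspective projection looks like this (a minimal sketch; the function name is mine):

    // Convert a stored depth value d in [0,1] back to a linear eye-space
    // distance, given the lens's near and far planes.
    float linearizeDepth(float d, float nearZ, float farZ) {
        float ndc = 2.0f * d - 1.0f;  // window depth [0,1] -> NDC depth [-1,1]
        return (2.0f * nearZ * farZ) / (farZ + nearZ - ndc * (farZ - nearZ));
    }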
Alternatively, you can use a shader to write the appropriate distance values into the colour buffer, which is particularly useful if you want to store distance values of a perspective camera without having to undo the perspective projection later, or if you want to store the original world-space positions.
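As a sketch of this second approach, a GLSL pair (kept here as C++ raw strings; the attribute and uniform names are assumptions, and Panda3D's built-in shader inputs would differ) might write the linear eye-space distance into the colour buffer:

    // Vertex shader: pass the eye-space position along.
    const char* distVert = R"glsl(
        #version 330 core
        layout(location = 0) in vec3 position;
        uniform mat4 modelView;
        uniform mat4 projection;
        out vec3 viewPos;
        void main() {
            viewPos = (modelView * vec4(position, 1.0)).xyz;
            gl_Position = projection * vec4(viewPos, 1.0);
        }
    )glsl";

    // Fragment shader: length(viewPos) is the true camera-to-fragment
    // distance, already linear, so no un-projection is needed afterwards.
    const char* distFrag = R"glsl(
        #version 330 core
        in vec3 viewPos;
        out vec4 fragColor;
        void main() {
            fragColor = vec4(vec3(length(viewPos)), 1.0);
        }
    )glsl";

Rendering this into a floating-point colour target (e.g. GL_R32F) avoids the distances being clamped to [0,1].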
If you want to be able to access the values on the CPU, you will need to use RTM_copy_ram instead of RTM_bind_or_copy when binding your texture, which tells Panda3D to transfer the results of rendering the buffer into CPU-accessible memory.

How to extract OpenGL Polygon Rasterization?

I have a triangle mesh that I am rendering in OpenGL. I need to get a mapping from each pixel in the resulting view to the index of the polygon associated with it. Is there an easy way to do that?
If it is possible to do that easily, what would be a reasonable datatype for storing it?
Draw into a framebuffer with an integer texture backing it, and write gl_PrimitiveID as the fragment shader's output. This will give you a map from each pixel to the index of the primitive that covers it.
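A sketch of that approach (GL 3.0+; names like primIdFrag are mine). A GL_R32I colour attachment, i.e. one 32-bit signed integer per pixel, is a reasonable datatype for the map:

    #include <GL/glew.h>
    #include <vector>

    // Fragment shader (as a C++ raw string): write the triangle index.
    const char* primIdFrag = R"glsl(
        #version 330 core
        out int primitiveIndex;              // goes to a GL_R32I attachment
        void main() {
            primitiveIndex = gl_PrimitiveID; // index of the current triangle
        }
    )glsl";

    // Back the FBO's colour attachment with an integer texture.
    void createIndexTarget(GLuint fbo, int width, int height) {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R32I, width, height, 0,
                     GL_RED_INTEGER, GL_INT, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);
        // Tip: clear to -1 with glClearBufferiv so pixels not covered by any
        // triangle are distinguishable from primitive 0.
    }

    // Read the pixel-to-primitive map back after rendering.
    std::vector<GLint> readIndexMap(int width, int height) {
        std::vector<GLint> ids(width * height);
        glReadPixels(0, 0, width, height, GL_RED_INTEGER, GL_INT, ids.data());
        return ids;
    }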

Data from Depth Sensor to OpenGL's z-buffer to achieve occlusion

I would like to learn more about occlusion in augmented reality apps using the data from a depth sensor (e.g. Kinect or the RealSense RGB-D dev kit).
I read that what one should do is compare the z-buffer values of the rendered objects with the depth map values from the sensor, and somehow mask the values so that only the pixels closer to the user are shown. Does anyone have any resources or open source code that does this, or could help me understand it?
What is more, I want my hand (which I detect as a blob) to always occlude the virtual objects. Isn't there an easier way to do this?
You can upload the depth data as a texture and bind it as the depth buffer for the render target.
This requires matching the near and far planes of the projection matrix with the min and max values of the depth sensor.
If the render target isn't the same size as the depth data, you can instead sample the depth texture in the fragment shader and discard fragments that would be occluded.
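A minimal sketch of the depth-attachment route, assuming the sensor depth has already been remapped from metric units into window-space [0,1] depth using the same near/far planes as the projection matrix (the function and parameter names are mine):

    #include <GL/glew.h>

    // Upload the (already remapped) sensor depth and attach it as the depth
    // buffer of the FBO used to render the virtual objects.
    void uploadSensorDepth(GLuint fbo, const float* depth01, int w, int h) {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, w, h, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, depth01);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_2D, tex, 0);
        // With depth testing enabled, virtual fragments that fall behind the
        // real-world depth now fail the depth test: the hand occludes them.
    }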

Per vertex mesh deformation

I am doing a project where I want to have a vertex buffer (in OpenGL) whose vertices make up a mesh of an image, meaning that each pixel of the image is covered by two triangles (a square per pixel). I think I have achieved that by simply initializing a window the size of the image and then creating a VBO with a vertex grid of the same size (a grid of width x height vertices).
For this image I also have a disparity/correspondence map (a vector field) which I want to interpolate and use to deform this image mesh (deform the image/vertex grid). The idea comes from this article http://graphics.tu-bs.de/media/publications/stich08VTI.pdf (section 5), which is essentially what I want to do.
I want to have an image represented as a mesh and deform it by a vector field to obtain a new virtual view. How can this be done easily? I can't fully grasp how I am supposed to move the vertices in the vertex shader.
Firstly, the correspondences (in the vector field) are per pixel, but I can only move vertices, and one vertex is shared by neighbouring pixels, so how do I deal with this (moving pixels)?
Secondly, is the phrase "per vertex mesh deformation" just moving the vertices in the vertex shader by some offsets (in this case, the vector field)?
And thirdly, if I manage to deform the mesh, how do I sample the original image correctly to get the "new view"? Do I just deform a set of UV coordinates by the same vector field as the image mesh and then sample the original image as a texture in the fragment shader?
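Not an authoritative answer, but for illustration, a minimal sketch of what such a vertex shader could look like, assuming the interpolated vector field is uploaded as a 2-channel floating-point texture, and following one common reading of the paper (move each vertex, keep its original UV so the image content is dragged along); all names here are hypothetical:

    // Vertex shader (as a C++ raw string): one vertex per grid point.
    const char* deformVert = R"glsl(
        #version 330 core
        layout(location = 0) in vec2 gridPos;  // grid vertex in [0,1]^2
        uniform sampler2D vectorField;         // interpolated displacements (e.g. RG32F)
        out vec2 uv;
        void main() {
            // Explicit LOD, since implicit derivatives are unavailable in the
            // vertex stage. Bilinear filtering interpolates the per-pixel
            // field at the vertex position, which also addresses the
            // "one vertex, several pixels" concern.
            vec2 offset = textureLod(vectorField, gridPos, 0.0).xy;
            uv = gridPos;                      // keep the ORIGINAL coords
            vec2 deformed = gridPos + offset;  // the per-vertex deformation
            gl_Position = vec4(deformed * 2.0 - 1.0, 0.0, 1.0); // [0,1] -> clip
        }
    )glsl";

    // Fragment shader: sample the undeformed image with the original UVs,
    // so moving a vertex drags the image content along with it.
    const char* deformFrag = R"glsl(
        #version 330 core
        in vec2 uv;
        uniform sampler2D sourceImage;
        out vec4 fragColor;
        void main() {
            fragColor = texture(sourceImage, uv);
        }
    )glsl";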

OpenGL colorize filters

I have an OpenGL quad that is rendered with a grayscale gradient. I would like to colorize it by applying a filter, something like:
If color = 0,0,0 then set color to 255,255,255
If color = 0,0,1 then set color to 255,255,254
etc., or some scheme I decide on.
Note: the reason I do this in grayscale is that the algorithm I'm using was designed to be drawn in grayscale and colorized afterwards, since the colors may not be known immediately.
This would be similar to Java's LookupOp: http://download.oracle.com/javase/6/docs/api/java/awt/image/LookupOp.html.
Is there a way to do this in OpenGL?
thanks,
Jeff
You could interpret those colours from the grayscale gradient as 1-D texture coordinates and then specify your look-up table as a 1-D texture. This seems to fit your situation.
Alternatively, you can use a fragment program (shader) to perform arbitrary colour transformations on individual pixels.
Some more explanation: What is a texture? A texture, conceptually, is some kind of lookup function, with some additional logic on top.
A 2-D texture is something which, for any pair of coordinates (s,t) or (x,y) in the range [0,0] - [1,1], yields a specific colour (RGB, RGBA, L, whatever). Additionally it has some settings like wrapping or filtering.
Underneath, a texture is described by discrete data of a given "density" - perhaps 16x16, perhaps 256x512. The filtering process makes it possible to specify a colour for any real number between [0,0] and [1,1] (by mixing/interpolating neighbouring texels or just taking the nearest one).
A 1-D texture is identical, except that it maps just a single real value to a colour. Therefore, it can be thought of as a specific type of a "lookup table". You can consider it equivalent to a 2-D texture based on a 1xN image.
If you have a grayscale gradient, you may render it directly by treating each gradient value as a colour - or you can treat it as a texture coordinate (= an index into the lookup table) and use the 1-D texture to perform an arbitrary colour space transform.
You'd just need to translate the gradient values (from the 0..255 range) to the [0..1] range of texture indices. I'd recommend something like out = (in+0.5)/256.0. The 0.5 accounts for the half-texel offset: we want to point to the middle of a texel (a value inside the texture), not to the corner between two texels.
To get only the exact RGB values from the lookup table (= 1-D texture), also set the texture filters to GL_NEAREST.
BTW: Note that if you already need another texture to draw the gradient, then it gets a bit more complicated, because you'd want to treat the values read from one texture as coordinates into another texture - and I believe you'd need pixel shaders for that. Not that shaders are complicated or anything... they are extremely handy once you learn the basics.
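To make that concrete, a sketch of both pieces together: the 1-D lookup texture on the C++ side, and the small fragment shader you'd need when the gradient itself comes from a texture (names like createLut are mine):

    #include <GL/glew.h>

    // Build the 256-entry lookup table as a 1-D texture. GL_NEAREST ensures
    // only the exact table colours come back (rgb256 points to 256*3 bytes).
    GLuint createLut(const unsigned char* rgb256) {
        GLuint lutTex;
        glGenTextures(1, &lutTex);
        glBindTexture(GL_TEXTURE_1D, lutTex);
        glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB8, 256, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, rgb256);
        glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        return lutTex;
    }

    // Fragment shader (as a C++ raw string): route the grayscale value
    // through the lookup table, with the half-texel offset described above.
    const char* lutFrag = R"glsl(
        #version 330 core
        in vec2 uv;
        uniform sampler2D grayscale;  // texture holding the gradient
        uniform sampler1D lut;        // the 256-entry lookup table
        out vec4 fragColor;
        void main() {
            float g = texture(grayscale, uv).r;     // gradient value in [0,1]
            float idx = (g * 255.0 + 0.5) / 256.0;  // out = (in + 0.5) / 256
            fragColor = vec4(texture(lut, idx).rgb, 1.0);
        }
    )glsl";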