Displaying depth buffer as image in OpenGL [duplicate] - opengl

This question already has answers here:
OpenGL - How to access depth buffer values? - Or: gl_FragCoord.z vs. Rendering depth to texture
(2 answers)
Closed 5 years ago.
I am using freeglut, and have a simple 3-D model (it is just a cube) that I can display on my screen. I am trying to obtain the depth map of this cube given the current coordinates and orientation of the camera and display the depth map on some window.
Is this possible? If so, how?

You can create a framebuffer object (FBO) and attach only a depth texture to it. Bind this FBO while rendering, and the attached depth texture will receive the depth values. Afterwards, with the FBO still bound, use a function like glReadPixels with GL_DEPTH_COMPONENT to read the data into a buffer. Then, using a library like SOIL, you can save it in whatever format you want, such as PNG or JPEG.
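The steps above might look roughly like this (a minimal sketch, not a drop-in implementation: WIDTH, HEIGHT, and the variable names are placeholders, and error checking such as glCheckFramebufferStatus is omitted):

```c
/* Sketch: depth-only FBO, then read the depth values back to the CPU. */
GLuint depthTex, fbo;

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24,
             WIDTH, HEIGHT, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);
glDrawBuffer(GL_NONE);   /* no color attachment, so disable color output */
glReadBuffer(GL_NONE);

/* ... render the cube here with depth testing enabled ... */

/* Read the depth values back while the FBO is still bound. */
float *depth = malloc(WIDTH * HEIGHT * sizeof(float));
glReadPixels(0, 0, WIDTH, HEIGHT, GL_DEPTH_COMPONENT, GL_FLOAT, depth);
```

From here the buffer can be handed to an image library for saving, or re-uploaded as a grayscale texture and drawn to a second window.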

Related

OpenGL: Saving depth map as 2d array

I am able to render depth maps of 3d models to the screen using openGL. I am trying to obtain a 2d array (or matrix) representation of the depth map, say as a grayscale image, so I can perform image processing operations on it, like masking and segmentation.
So far, my depth map simply prints depth values instead of the colors in the fragment shader. How can I save the resulting depth map display as a matrix?
You have to use a framebuffer object. Attach a texture to it as the depth attachment, and then use it as a normal texture. Have a look at this tutorial for an example.
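One detail worth adding before treating the result as a grayscale image: with a perspective projection the stored depth values are nonlinear, so a raw dump looks almost uniformly white. A small helper can convert them back to eye-space distance (a sketch; linearize_depth is a hypothetical name, and zNear/zFar must match the projection you rendered with):

```c
/* Convert a depth-buffer sample d in [0,1] back to eye-space distance,
   assuming a standard perspective projection with the given near/far planes. */
float linearize_depth(float d, float zNear, float zFar)
{
    float ndc = 2.0f * d - 1.0f;                  /* [0,1] -> [-1,1] */
    return (2.0f * zNear * zFar) /
           (zFar + zNear - ndc * (zFar - zNear));
}
```

With zNear = 0.1 and zFar = 100, a sample of 0 maps back to 0.1 (the near plane) and a sample of 1 maps back to 100 (the far plane); dividing the result by zFar gives a usable grayscale value.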

OpenGL / GLSL Terrain Blending Textures Solution

I'm trying to get a map editor to work. My idea was to create a texture array for blending multiple terrain textures, with a single texture channel (r, for example) bound to a terrain texture's alpha.
The question is: is it possible to create some kind of buffer that can be read like a texture sampler and store as many channels as I need?
For example :
texture2D(buffer, uv)[0].rgb
Is this too far-fetched?
This would be faster than creating 7 textures and sending them to the GLSL shader.
You can use a texture array (GL_TEXTURE_2D_ARRAY) and access the individual layers through a sampler2DArray, with the third texture coordinate selecting the layer, e.g. texture(myArray, vec3(uv, layer)).
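Allocating such an array might look like this (a sketch; WIDTH, HEIGHT, layerCount, layer, and layerPixels are placeholders):

```c
/* Sketch: allocate a 2D texture array for the terrain layers and
   upload one layer of pixel data into it. */
GLuint texArray;
glGenTextures(1, &texArray);
glBindTexture(GL_TEXTURE_2D_ARRAY, texArray);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8,
             WIDTH, HEIGHT, layerCount,        /* depth = number of layers */
             0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,
                0, 0, layer,                   /* zoffset selects the layer */
                WIDTH, HEIGHT, 1,
                GL_RGBA, GL_UNSIGNED_BYTE, layerPixels);

/* In the shader, sample with:
   texture(sampler2DArray, vec3(uv, float(layer))).rgb */
```

All layers then occupy a single texture unit, which is the point of using an array rather than 7 separate textures.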

How to draw a 3d rendered Image (perspective proj) back to another viewport with orthogonal proj. simultaneously using multiple Viewports and OpenGL [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
My problem is that I want to take a kind of snapshot of a 3D scene, manipulate that snapshot, and draw it back to another viewport of the scene.
I just read the image using glReadPixels.
Now I want to draw that image back to a specified viewport, but using modern OpenGL.
I read about framebuffer objects (FBO) and pixel buffer objects (PBO), and the solution of writing the framebuffer contents into a 2D texture and passing it to the fragment shader as a simple texture.
Is this the correct way, or can anyone provide a simple example of how to render the image back to the scene using modern OpenGL rather than the deprecated glDrawPixels method?
The overall process you want to do will look something like this:
1. Create an FBO with a color and depth attachment, and bind it.
2. Render your scene.
3. Copy the contents of its color attachment to client memory to do the operations you want on it.*
4. Copy the image back into an OpenGL texture (you may as well keep the same one).
5. Bind the default framebuffer (0).
6. Render a full screen quad using your image as a texture map (possibly using a different shader or switching shader functionality).
Possible questions you may have:
Do I have to render a full screen quad? Yup. You can't bypass the vertex shader. So somewhere just go make four vertices with texture coordinates in a VBO, yada yada.
My vertex shader deals with projecting things, how do I deal with that quad? You can create a subroutine that toggles how you deal with vertices in your vertex shader. One can be for regular 3D rendering (ie transforming from model space into world/view/screen space) and one can just be a pass through that sends along your vertices unmodified. You'll just want your vertices at the four corners of the square on (-1,-1) to (1,1). Send those along to your fragment shader and it'll do what you want. You can optionally just set all your matrices to identity if you don't feel like using subroutines.
*If you can find a way do your texture operations in a shader, I'd highly recommend it. GPUs are quite literally built for this.
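The last two steps above (binding the default framebuffer and drawing the textured full-screen quad) could be sketched like this, assuming the pass-through shader program and its sampler uniform are already set up; all variable names here are placeholders:

```c
/* A full-screen quad directly in NDC: x,y position plus u,v texcoord. */
static const float quad[] = {
    /*  x      y     u     v  */
    -1.0f, -1.0f, 0.0f, 0.0f,
     1.0f, -1.0f, 1.0f, 0.0f,
    -1.0f,  1.0f, 0.0f, 1.0f,
     1.0f,  1.0f, 1.0f, 1.0f,
};

GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);

/* attribute 0 = position, attribute 1 = texcoord */
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void *)0);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float),
                      (void *)(2 * sizeof(float)));
glEnableVertexAttribArray(1);

glBindFramebuffer(GL_FRAMEBUFFER, 0);      /* back to the default framebuffer */
glBindTexture(GL_TEXTURE_2D, imageTex);    /* the processed image */
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
```

Because the positions are already in NDC, the vertex shader can pass them through unchanged, which is exactly the identity-matrix/pass-through case described above.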

OpenGL - how to render object to 3D texture as a volumetric billboard

I'm trying to implement volumetric billboards in OpenGL 3.3+ as described here
and video here.
The problem I'm facing now (quite basic) is: how do I render a 3D object into a 3D texture (as described in the paper) efficiently? Assuming the object can be stored in a 256x256x128 texture, creating 256*256*128*2 framebuffers (since it's said that it should be rendered twice along each axis: +X, -X, +Y, -Y, +Z, -Z) would be insane, and as far as I know there are too few texture units to process that many textures (not to mention the amount of time needed).
Does anyone have any idea how to deal with something like that?
A slice of a 3D texture can be attached directly to the current framebuffer. So create a framebuffer and a 3D texture, and then render like this:
glFramebufferTexture3D( GL_FRAMEBUFFER, Attachment, GL_TEXTURE_3D,
TextureID, 0, ZSlice );
...render to the slice of the 3D texture...
So you need only one framebuffer, re-attached once per Z-slice of your target 3D texture.
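Put together, the slice loop might look like this (a sketch; volumeTex, sliceCount, and renderSlice are placeholders):

```c
/* Sketch: render into each Z-slice of a 3D texture with a single FBO,
   re-attaching one slice per iteration. */
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

for (int z = 0; z < sliceCount; ++z) {
    glFramebufferTexture3D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_3D, volumeTex, 0, z);
    glClear(GL_COLOR_BUFFER_BIT);
    renderSlice(z);   /* draw whatever belongs in this slice */
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```

On GL 3.2+ hardware, glFramebufferTextureLayer is an equivalent alternative for selecting the slice, and layered rendering with a geometry shader can fill multiple slices in one pass.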

Get the last frame color from GLSL [duplicate]

This question already has answers here:
Is there a way in Opengl es 2.0 fragment shader, to get a previous fragment color
(2 answers)
Closed 12 months ago.
I want to process a texture in the fragment shader. However, the current frame should be based on information from the last frame, such as neighbor positions. So I need to write the current frame into some place/buffer/object and read it back in the next loop.
Can someone point me in the right direction?
Use framebuffer objects. Create two FBOs and render into them alternately, each frame binding the other one as the texture for sourcing the data.
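The per-frame ping-pong could be sketched like this, assuming fbo[2] and tex[2] are already created with tex[i] attached to fbo[i] as GL_COLOR_ATTACHMENT0; drawFullScreenQuad is a placeholder for your draw call:

```c
/* Sketch: ping-pong between two FBOs. Each frame renders into one FBO
   while sampling the texture attached to the other. */
int src = 0, dst = 1;

while (running) {                                 /* per-frame loop */
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);  /* write into one... */
    glBindTexture(GL_TEXTURE_2D, tex[src]);       /* ...read from the other */
    drawFullScreenQuad();                         /* shader uses last frame */

    /* swap roles for the next frame */
    int tmp = src; src = dst; dst = tmp;
}
```

The swap is essential: sampling a texture that is simultaneously attached to the currently bound draw framebuffer is undefined behavior in OpenGL, which is why a single FBO cannot do this.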