Is it possible to project the camera depth onto a plane?
Let me explain: if I simply transfer the depth buffer onto a plane, it will always display the depth from the camera's point of view. How can I instead display the depth from the plane's point of view?
I want to apply the effect in a shader. To me it looks like a matrix problem, but I don't get it.
The depth buffer is relative to the view; it is possible to project it onto any plane using (for example) shaders. But...
The depth buffer is not a full geometrical representation of the object, but only of the surface that is visible from the camera's POV.
If you project the depth buffer, part of the object will probably not be projected (see image).
In the picture, the camera (the red eye) is looking at an object (black). The depth buffer represents the distance between the camera and the red surface.
For the plane (blue line) you probably want the projection of the entire object (blue surface), but by projecting the red surface onto the plane you will only get a small portion of the entire blue surface.
If you want the entire blue surface:
1) Change the camera POV to just behind the plane.
2) Render the scene.
3) Get your depth buffer and save it into a texture/image/buffer (P).
4) Reset your camera POV.
5) Render the scene, using the image (P) in your shader (a shader sketch follows below).
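A minimal fragment-shader sketch of step 5, assuming the depth from step 3 was saved into a sampler2D and that the view-projection matrix of the camera placed behind the plane is passed as a uniform (all names below are placeholders):

// Fragment shader: show the depth as seen from the plane's point of view.
in vec3 vWorldPos;             // world-space position from the vertex shader
uniform mat4 uPlaneViewProj;   // view-projection of the camera behind the plane
uniform sampler2D uPlaneDepth; // the saved depth image (P)
out vec4 fragColor;

void main()
{
    // Project the fragment into the plane camera's clip space
    vec4 clip = uPlaneViewProj * vec4(vWorldPos, 1.0);
    vec3 ndc  = clip.xyz / clip.w;        // normalized device coordinates
    vec2 uv   = ndc.xy * 0.5 + 0.5;       // [-1,1] -> [0,1]

    // Read the depth stored from the plane's point of view
    float d = texture(uPlaneDepth, uv).r;
    fragColor = vec4(vec3(d), 1.0);
}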
My scene background is a procedural texture that draws an ocean, a lava floor, or some other such background. It extends completely underneath as well, as if you were inside a cubemap. It would be easier if I could assume the view were the same in all directions, but if there's a sun, for example, you cannot.
Now if I wanted to put a chrome sphere in the middle, what does it reflect? Does the sphere see the same thing as the main camera does?
Assume it's expensive to render the background, and I do not want to do it multiple times per frame. I can save a copy to use in the reflection if that helps.
Can someone suggest a general approach? Here's an example of the procedural texture I mean (this is all in the shader, no geometry other than a quad):
https://www.shadertoy.com/view/XtS3DD
To answer your first question: In the real world, the reflection you see in the sphere depends on both the position of the camera, and the position of the sphere itself. However, taking both positions into account is prohibitively expensive for a moving sphere when using cube mapping (the most common approach), since you have to re-render all six faces of the cubemap with each frame. Thus, most games "fake" reality by using a cubemap that is centered about the origin ((0, 0, 0) in world-space) and only rendering static objects (trees, etc.) into the cube map.
Since your background is entirely procedural, you can skip creating cubemap textures. If you can define your procedural background texture as a function of direction (not position!) from the origin, then you can use the normal vector of each point on the sphere, plus the sphere's position, plus the camera position, to sample your background.
Here's the formula for it, using some glsl pseudocode:
vec3 N = ...; // unit surface normal at the point on the sphere
vec3 V = ...; // position of the camera
vec3 S = ...; // position of the point on the sphere

// Reflect the incident view direction (pointing from the camera towards
// the point on the sphere) about the surface normal. Note that GLSL's
// reflect() expects the incident vector to point towards the surface,
// so the argument is S - V rather than V - S.
vec3 ray = normalize(reflect(S - V, N));

vec4 color = proceduralBackgroundFunc(ray);
Above, color is the final output of the shader for point S on the sphere's surface.
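To make the "function of direction" idea concrete, here is a hypothetical proceduralBackgroundFunc, just a sky gradient with a sun term; the uniform uSunDir and the colors are made up for illustration, and your actual ocean/lava code would go in its place:

// Hypothetical background defined purely by direction (not position).
uniform vec3 uSunDir; // assumed: normalized direction towards the sun

vec4 proceduralBackgroundFunc(vec3 dir)
{
    // Vertical gradient from horizon color to zenith color
    vec3 sky = mix(vec3(0.8, 0.9, 1.0), vec3(0.1, 0.3, 0.8), clamp(dir.y, 0.0, 1.0));
    // Cheap sun highlight around uSunDir
    float sun = pow(max(dot(normalize(dir), uSunDir), 0.0), 256.0);
    return vec4(sky + vec3(sun), 1.0);
}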
Alternatively, you can prerender the background into a cube texture, and then sample from it like so (changing only the last line of code from above):
vec4 color = texture(cubeSample,ray);
I am working on voxelisation using the rendering pipeline, and I can now successfully voxelise the scene using vertex + geometry + fragment shaders. My voxels are stored in a 3D texture of size, for example, 128x128x128.
My original model of the scene is centered at (0,0,0) and extends along both the positive and negative axes. The texture, however, is centered at (63,63,63) in texture coordinates.
I implemented a simple ray marching for visualization, but it doesn't take the camera movement into account (I can only render from a few fixed positions, because my rays have to be generated in the coordinate system of the 3D texture).
My question is: how can I generate my rays at a point Po with direction D in the coordinates of my 3D model, yet have them intersect the voxels at the corresponding positions in texture coordinates, so that every movement of the camera in the 3D world is remapped into the voxel coordinates?
Right now I generate the rays in this way:
create a quad in front of the camera at position (63,63,-20)
cast rays in the direction towards (63,63,3)
I think you should store your entire view transform matrix in your shader uniform params. Then, for each shader execution, you can use its screen coordinates and the view transform to compute the view ray direction for your particular pixel.
Having the ray direction and the camera position, you then use them the same way as you do now.
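A sketch of that per-pixel ray setup, assuming the inverse view-projection matrix, the camera position, and a model-to-texture-space matrix are passed as uniforms (all names are placeholders; for a model spanning [-1,1] on each axis, uModelToTex would just scale by 0.5 and offset by 0.5):

// Reconstruct the view ray for this fragment from its NDC coordinates.
uniform mat4 uInvViewProj;  // inverse of (projection * view)
uniform vec3 uCameraPos;    // camera position in model/world space
uniform mat4 uModelToTex;   // maps model space into [0,1]^3 texture space

vec3 rayDirForPixel(vec2 ndc)   // ndc in [-1,1]^2, e.g. derived from gl_FragCoord
{
    vec4 farPoint = uInvViewProj * vec4(ndc, 1.0, 1.0); // unproject the far plane
    return normalize(farPoint.xyz / farPoint.w - uCameraPos);
}

// Remap the ray into texture space before marching the 3D texture:
// vec3 texOrigin = (uModelToTex * vec4(uCameraPos, 1.0)).xyz;
// vec3 texDir    = normalize(mat3(uModelToTex) * rayDirForPixel(ndc));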
There's also another way to do this that you can try:
Let's say you have a cube (0,0,0)->(1,1,1) and for each corner you assign a color based on its position, like (1,0,0) is red, etc.
Now for every frame you draw your cube's front faces to one texture, and its back faces to a second texture.
In the final rendering you can use both textures to get the entry and exit 3D vectors, already in texture space, which makes your final shader much simpler (see the sketch after the links below).
You can read better descriptions here:
http://graphicsrunner.blogspot.com/2009/01/volume-rendering-101.html
http://web.cse.ohio-state.edu/~tong/vr/
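A minimal sketch of that final pass, assuming the position textures from the front-face and back-face passes are bound as uFrontFaces and uBackFaces (placeholder names) and the voxels as a sampler3D:

// Ray march between the entry and exit points stored in the two textures.
uniform sampler2D uFrontFaces; // entry points, in texture space
uniform sampler2D uBackFaces;  // exit points, in texture space
uniform sampler3D uVoxels;

vec4 marchVoxels(vec2 screenUV)
{
    vec3 enter = texture(uFrontFaces, screenUV).xyz;
    vec3 exit  = texture(uBackFaces,  screenUV).xyz;
    vec3 span  = exit - enter;

    const int STEPS = 128;        // arbitrary step count for the sketch
    vec4 accum = vec4(0.0);
    for (int i = 0; i < STEPS; ++i)
    {
        vec3 p = enter + span * (float(i) / float(STEPS));
        vec4 s = texture(uVoxels, p);
        accum.rgb += (1.0 - accum.a) * s.a * s.rgb;  // front-to-back blending
        accum.a   += (1.0 - accum.a) * s.a;
        if (accum.a > 0.99) break;                   // early exit when opaque
    }
    return accum;
}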
I want to create a skybox, which is just a textured cube around the camera. But I don't actually understand how this can work, because the camera's viewing volume is a frustum and the skybox is a cube. According to this source:
http://www.songho.ca/opengl/gl_projectionmatrix.html
Note that the frustum culling (clipping) is performed in the clip coordinates, just before dividing by wc. The clip coordinates xc, yc and zc are tested by comparing with wc. If any clip coordinate is less than -wc, or greater than wc, then the vertex will be discarded.
the vertices of the skybox faces should be clipped if they are outside the frustum.
So it seems to me that the cube should actually be a frustum and should exactly match the GL frustum faces, so that my whole scene is wrapped inside the skybox, but I am sure this is a bad approach. Is there any way to fill the whole screen with something that wraps the whole GL frustum?
The formulation in your link is rather bad. It is not actually vertices that get clipped, but rather fragments. Drawing a primitive whose vertices are completely off-screen does not prevent the fragments that would intersect the screen from being drawn. (The picture in the link actually shows this being the case.)
That having been said, however, it may (or may not, depending on the design of your rendering code) be easier to simply draw a full-screen quad, and use the inverse of the projection matrix to calculate the texture coordinates instead.
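A sketch of that full-screen-quad variant, assuming the quad's NDC position is passed through from the vertex shader, the sky is a cube texture, and uInvViewProj is the inverse of the projection times a rotation-only view matrix (all names are placeholders):

// Fragment shader: turn each screen pixel into a view direction and
// sample the skybox with it; no cube geometry is needed.
uniform mat4 uInvViewProj;  // inverse of (projection * rotation-only view)
uniform samplerCube uSky;
in vec2 vNdc;               // full-screen quad position in [-1,1]^2
out vec4 fragColor;

void main()
{
    vec4 farPoint = uInvViewProj * vec4(vNdc, 1.0, 1.0); // unproject the far plane
    vec3 dir      = normalize(farPoint.xyz / farPoint.w);
    fragColor     = texture(uSky, dir);
}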
What I'd like to do:
I have a 3D-transformed, UV-mapped object with a white texture, as well as a screen-space image.
I want to bake the screen-space image into the texture of the object, such that its 3D-transformed representation on screen exactly matches the screen-space image (so I want to project it onto the UV space).
I'd like to do this with image load/store. I imagine it as:
1st pass: render the transformed 3D object's UV coordinates into an offscreen texture.
2nd pass: render a screen-sized quad. For each pixel, check the value of the texture rendered in the first pass; if there are valid texture coordinates there, look up the screen-space image with the quad's own UVs and write that texel color with image load/store into a texture buffer, using the UV coordinates read from the input texture as the index.
As I have never worked with this feature before, I'd like to ask whether someone who has already worked with it considers this feasible, and whether there are already examples that do something in this direction.
Your proposed way is certainly one method to do it, and actually it's quite common. The other way is to do a back projection from screen space to texture space. It's not as hard as it might sound at first. Basically, for each triangle you have to find the transformation of the tangent-space (UV) vectors on the model's surface to their screen counterparts. In addition, transform the triangle itself to find the boundaries of the screen-space triangle in the picture. Then you invert that projection.
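A rasterizer-based variant of that back projection (not the explicit per-triangle inversion described above, but it lands in the same place) is to draw the mesh with its UVs as positions and let the hardware do the inversion; uModelViewProj and uScreenImage are placeholder names, and the render target is the object's texture:

// Vertex shader: rasterize the mesh in UV space, but remember where each
// vertex would land on screen under the normal 3D transform.
uniform mat4 uModelViewProj;
in vec3 aPosition;   // model-space position
in vec2 aTexCoord;   // UV coordinates
out vec4 vClipPos;
void main()
{
    vClipPos    = uModelViewProj * vec4(aPosition, 1.0); // screen-space position
    gl_Position = vec4(aTexCoord * 2.0 - 1.0, 0.0, 1.0); // draw into UV space
}

// Fragment shader: fetch the screen-space image at the projected position
// and write it into the texel currently being baked.
uniform sampler2D uScreenImage;
in vec4 vClipPos;
out vec4 fragColor;
void main()
{
    vec2 screenUV = (vClipPos.xy / vClipPos.w) * 0.5 + 0.5;
    fragColor = texture(uScreenImage, screenUV);
}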
I would like to render 3D volume data: density (which can be mapped to the alpha channel) and temperature (which can be mapped to RGB).
Currently I am simulating a maximum intensity projection, i.e. rendering the most dense/opaque sample in the end. But this method loses the depth perception.
I would like to imitate an effect like a fire inside smoke.
So my question is: what techniques are there in OpenGL to generate images based on the available data?
Any idea is welcome.
Thanks Arman.
I would try a volume ray caster first.
You can google "Volume Visualization With Ray Casting" and that should give you most of what you need. NVIDIA has a great sample (using OpenGL) of ray casting through a 3D texture.
For your specific implementation, you would just need to keep stepping through the volume, accumulating the temperature until you reach the wanted density.
If your volume doesn't fit in video memory, you can do the ray casting in pieces and then do a composition step.
A quick description of ray casting:
CPU:
1) Render a six-sided cube in world space as the drawing primitive; make sure to use depth culling.
Vertex shader:
2) In the vertex shader, store off the world position of the vertices (this will be interpolated per fragment).
Fragment shader:
3) Use the interpolated position minus the camera position to get the vector of traversal through the volume.
4) Use a while loop to step through the volume from the point on the cube through to the other side. There are 3 ways to know when to end:
A) at each step, test whether the point is still inside the cube.
B) do a ray intersection with the cube and calculate the distance between the intersections.
C) do a pre-render of the cube with front-face culling, store the depths into a second texture map, and then just sample at the screen pixel to get the distance.
5) Accumulate while you loop and set the pixel color (a fragment-shader sketch follows below).
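A minimal fragment-shader sketch of steps 3-5, assuming the volume is bound as a sampler3D with temperature in .rgb and density in .a, the cube spans [0,1]^3, and the interpolated cube position and camera position come in as described above (all names are placeholders):

uniform sampler3D uVolume;   // rgb = temperature, a = density
uniform vec3 uCameraPos;     // camera position in the cube's space
in vec3 vWorldPos;           // interpolated cube position from the vertex shader
out vec4 fragColor;

void main()
{
    vec3 dir = normalize(vWorldPos - uCameraPos);   // step 3: traversal direction
    vec3 p   = vWorldPos;                           // start on the cube surface
    const float stepSize = 1.0 / 256.0;             // arbitrary step length
    vec4 accum = vec4(0.0);

    for (int i = 0; i < 512; ++i)                   // step 4, ending via option A
    {
        if (any(lessThan(p, vec3(0.0))) || any(greaterThan(p, vec3(1.0))))
            break;                                  // left the cube
        vec4 s = texture(uVolume, p);
        accum.rgb += (1.0 - accum.a) * s.a * s.rgb; // step 5: front-to-back blend
        accum.a   += (1.0 - accum.a) * s.a;
        if (accum.a > 0.99) break;                  // early out when nearly opaque
        p += dir * stepSize;
    }
    fragColor = accum;                              // set the pixel color
}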