As far as I know, there are two ways:
"Flip" the scene on the y axis and add some blending - I'm avoiding this since I've added deferred rendering, which doesn't support transparency without mixing in forward rendering.
Render the scene to a cube map and use the "reflect" function in GLSL - this seems good for certain objects, but it's a plane I'm trying to do reflections on.
Any suggestions? I've considered rendering the scene to a texture using an orthographic projection.
I am trying to create a lomo/fisheye effect on an image using OpenGL.
Should I use cube mapping and a fisheye projection? Is there any open source code I can refer to?
You can draw a single quad with the image textured onto it, and use a fragment shader to warp the texture coordinate per-pixel as you desire. You'll have to do all the math yourself, but it looks like the previous post here might be a good starting point.
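For illustration, here is a minimal sketch of such a fragment shader; the uniform/varying names and the particular warp and vignette functions are my assumptions, not taken from the linked posts:

    #version 330 core

    uniform sampler2D uImage;  // assumed name: the source photo
    in vec2 vTexCoord;         // assumed: full-screen quad UVs in [0,1]
    out vec4 fragColor;

    void main() {
        // Re-center so (0,0) is the middle of the image.
        vec2 p = vTexCoord * 2.0 - 1.0;
        float r = length(p);
        if (r > 0.0) {
            // Fisheye-style distortion: atan(r)/r < 1 and shrinks as r
            // grows, so samples are pulled toward the center more
            // strongly near the edges.
            p *= atan(r) / r;
        }
        // Lomo-style vignette: darken toward the corners.
        float vignette = 1.0 - smoothstep(0.4, 1.4, r);
        fragColor = vec4(texture(uImage, p * 0.5 + 0.5).rgb * vignette, 1.0);
    }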
Taking the question title beyond an effect on an image to producing "true" fisheye views, i.e. a usable field of view of 180+ degrees...
There are two slightly different methods to adapt existing pipelines to a fisheye view (with "simple" OpenGL). Both require rendering the scene up to 6 times - once for each side of the "box" that will be projected to a flat screen. Each side/surface must be square - or should be, depending on the method - and will likely be smaller than the original full viewport.
The number of sides required depends on how wide a fisheye field of view is requested. In a typical FPS, for a FOV of 130° three sides are enough; for a FOV up to 220°, five sides.
Method 1 - cubemap texture (GL_ARB_texture_cube_map)
init once: for a specific FOV, pre-calculate a translation table from 2D on-screen coordinates to 3D cubemap texture coordinates; a 16x16 grid for the whole screen should be enough
set up the viewport and position the camera accordingly to render the box sides, doing the usual rendering
bind the sides to a GL_TEXTURE_CUBE_MAP_ARB texture
iterate over the screen emitting (rectangular) GL_QUAD_STRIPs using the translation table and the cubemap, as sketched below.
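To make that last step concrete, here is a hedged immediate-mode sketch of the screen pass; the 17x17 grid table, its layout, and the NDC setup are my assumptions following the 16x16-cell suggestion above, and the six render-to-face passes are omitted:

    #include <GL/gl.h>

    #ifndef GL_TEXTURE_CUBE_MAP_ARB
    #define GL_TEXTURE_CUBE_MAP_ARB 0x8513  /* from glext.h */
    #endif

    typedef struct { float x, y, z; } vec3;  /* cubemap direction entry */

    /* grid[y][x] is the precomputed table mapping each screen grid point
       to a 3D cubemap lookup direction (built once for the chosen FOV). */
    void draw_fisheye_screen(GLuint cubemap, const vec3 grid[17][17])
    {
        glEnable(GL_TEXTURE_CUBE_MAP_ARB);
        glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, cubemap);

        /* Assumes projection/modelview are set so vertices are given
           directly in normalized device coordinates. */
        for (int y = 0; y < 16; ++y) {
            glBegin(GL_QUAD_STRIP);
            for (int x = 0; x <= 16; ++x) {
                /* Each strip vertex carries a 3D cubemap direction as
                   its texture coordinate; screen positions form a
                   regular 2D grid over the viewport. */
                glTexCoord3f(grid[y][x].x, grid[y][x].y, grid[y][x].z);
                glVertex2f(x / 8.0f - 1.0f, y / 8.0f - 1.0f);
                glTexCoord3f(grid[y + 1][x].x, grid[y + 1][x].y, grid[y + 1][x].z);
                glVertex2f(x / 8.0f - 1.0f, (y + 1) / 8.0f - 1.0f);
            }
            glEnd();
        }

        glDisable(GL_TEXTURE_CUBE_MAP_ARB);
    }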
Method 2 - 2D or rectangular textures (GL_NV_texture_rectangle)
init once: for a specific FOV, pre-calculate a "ray" table mapping texture + 2D texture coordinates to 2D screen coordinates
as in Method 1, set up the viewport and position the camera accordingly to render the box sides, doing the usual rendering
bind the sides to GL_TEXTURE_RECTANGLE_NV or GL_TEXTURE_2D textures
iterate over the textures emitting (trapezoid) GL_QUAD_STRIPs on the screen using the "ray" table.
Method 1 is simpler and delivers better results.
Gotchas:
set the cubemap texture wrapping to GL_CLAMP_TO_EDGE
in a typical FPS, the player view is not only pitch and yaw but also roll - calculate the camera orientation for each side via a proper rotation
if the render loop is combined with progress/physics/AI, this repeated scene re-rendering may confuse the existing internals.
All of this, of course, depends on the specifics of a particular engine. I'm not sure how well this applies to the OpenGL 3.3+ core profile, but the idea should be the same.
It is possible to draw the world as a fisheye in one pass by doing the fisheye transformation in the vertex shader. But it requires the original geometry to be sufficiently (pre-)tessellated. Alternatively, it may be possible to employ a Geometry Shader and/or Tessellation Shaders to organize the tessellation / transform feedback on the GPU. The latter would probably have to be built into the renderer from the ground up.
For a well-isolated example using the Quake 1 engine, see Fisheye and Panorama OpenGL FPS and this diff specifically. Unfortunately, the vertex shader example is lost.
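Since that example is lost, here is a hedged sketch of what such a one-pass fisheye vertex shader can look like, using an equidistant mapping; the uniform names and the crude depth handling are my assumptions, and it only looks right with densely tessellated geometry, as noted above:

    #version 330 core

    layout(location = 0) in vec3 aPosition;

    uniform mat4  uModelView;  // assumed: eye-space transform
    uniform float uFovScale;   // assumed: 2.0 / fov_in_radians
    uniform float uFar;        // assumed: far distance, for a crude depth

    void main() {
        vec4 eye = uModelView * vec4(aPosition, 1.0);
        float dist = length(eye.xyz);

        // Equidistant fisheye: screen radius is proportional to the
        // angle between the vertex direction and the forward axis (-Z).
        float theta = acos(clamp(-eye.z / dist, -1.0, 1.0));
        float r = theta * uFovScale;

        float lxy = length(eye.xy);
        vec2 dir = (lxy > 0.0) ? eye.xy / lxy : vec2(0.0);

        // Crude linear depth; a real renderer needs a proper scheme.
        gl_Position = vec4(dir * r, dist / uFar, 1.0);
    }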
I'm trying to use instancing to do VR rendering in OpenGL with 1 draw call and 2 instances (one for the left eye, one for the right eye). The vertex shader then translates the vertices left for instance ID 0 and right for instance ID 1. The only thing I still need is a per-instance viewport for automatic hardware culling/clipping. This is doable in DirectX, but is it in OpenGL?
Recently I was actually implementing instanced stereo rendering for VR and had the same problem. I had the choice of using a geometry shader for instanced viewports, but I didn't want the overhead it'd introduce. So in the end I settled on shifting the perspective for each view and using a clip plane.
So that's probably what you're looking for: a clip plane. It's really simple to implement in a vertex shader too; you just pass the 'x' coordinate into gl_ClipDistance.
https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/gl_ClipDistance.xhtml
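A hedged sketch of that idea (the uniform names are my own; the host side must enable GL_CLIP_DISTANCE0 and draw with two instances, e.g. via glDrawArraysInstanced):

    #version 330 core

    layout(location = 0) in vec3 aPosition;

    uniform mat4 uViewProj[2];  // assumed: one view-projection per eye

    void main() {
        // Instance 0 = left eye, instance 1 = right eye.
        vec4 clip = uViewProj[gl_InstanceID] * vec4(aPosition, 1.0);

        // Squeeze each eye's view into its half of the shared target.
        float side = (gl_InstanceID == 0) ? -1.0 : 1.0;
        clip.x = 0.5 * clip.x + 0.5 * side * clip.w;

        // Clip at the vertical center line so neither eye bleeds into
        // the other's half: the distance is >= 0 only on the correct side.
        gl_ClipDistance[0] = side * clip.x;

        gl_Position = clip;
    }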
Good luck
I'm looking to capture a 360-degree photo - a spherical panorama - of my scene. How can I best do this? If I understand it correctly, I can't do this the ordinary way by just setting the perspective FOV to 360.
If I do need a vertex shader, is there one available?
This is actually a nontrivial thing to do.
In a naive approach, a vertex shader that transforms the vertex positions not by matrix multiplication but by feeding them through trigonometric functions may seem to do the trick. The problem is that this will not make straight lines "curvy". You could use a tessellation shader to add sufficient geometry to compensate for this.
The most straightforward approach is two-fold. First you render your scene into a cubemap, i.e. render with a 90°×90° FOV into the 6 directions making up a cube. This allows you to use regular perspective projections when rendering the scene.
In a second step you use the generated cubemap to texture a screen filling grid, where the texture coordinates of each vertex are azimuth and elevation.
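The same lookup can also be done per fragment instead of per grid vertex, which is equivalent at sufficient resolution. A minimal sketch, with assumed names and an arbitrary axis convention, expecting a full-screen quad with UVs in [0,1]:

    #version 330 core

    uniform samplerCube uEnvMap;  // assumed: the cubemap from step one
    in vec2 vUV;                  // assumed: [0,1]^2 across the panorama
    out vec4 fragColor;

    const float PI = 3.14159265358979;

    void main() {
        // Map UV to azimuth in [-pi, pi] and elevation in [-pi/2, pi/2].
        float azimuth   = (vUV.x * 2.0 - 1.0) * PI;
        float elevation = (vUV.y - 0.5) * PI;

        // Turn the two angles into a 3D lookup direction (Y is up,
        // -Z is forward; the convention here is arbitrary).
        vec3 dir = vec3(cos(elevation) * sin(azimuth),
                        sin(elevation),
                        -cos(elevation) * cos(azimuth));
        fragColor = texture(uEnvMap, dir);
    }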
Another approach is to use tiled rendering with a very small FOV and rotating the "camera", kind of like taking a panoramic picture without a wide-angle lens. As a matter of fact, the cubemap-based approach is tiled rendering; it's just easier to get right than doing this directly with a changing camera direction and viewport placement.
I'm learning about how to use JOGL and OpenGL to render texture-mapped quads. I have a test program and a test quad, and I figured out how to enable GL_BLEND so that I can specify the alpha value of a vertex to make a quad with a sort of gradient... but now I want this to show through to another textured quad at the same position.
Drawing two quads with the same vertex locations didn't work; it only renders the first quad. Is this possible at all, or will I need to basically construct a custom texture on the fly based on what I want and then draw one quad with that texture? I was really hoping to take advantage of blending here...
Have a look at which glDepthFunc you're using; perhaps you're using GL_LESS/GL_GREATER, and it could work if you use GL_LEQUAL/GL_GEQUAL instead.
It's difficult to make out from the question what exactly you're trying to achieve, but here's a try:
For transparency to work correctly in OpenGL you need to draw the polygons from the furthest to the nearest to the camera. If your scene is static, this is definitely something you can do. But if it's rotating and moving, then this is usually not feasible, since you'd have to sort the polygons for each and every frame.
More on this can be found on this FAQ page:
http://www.opengl.org/resources/faq/technical/transparency.htm
For alpha blending, the renderer blends the transparent object with whatever colors are already behind it (from the camera's point of view) at the time the transparent object is rendered. If the transparent object is rendered first, there is nothing behind it to blend with; if it's rendered second, it has something to blend with.
So try rendering your opaque quad first, then your transparent quad second. Also, make sure your opaque quad is slightly behind your transparent quad (relative to the camera) so you don't get z-fighting.
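A minimal sketch of that draw order against the C API (JOGL mirrors these calls one-to-one on its GL object; the two quad helpers are hypothetical stand-ins for your own drawing code):

    #include <GL/gl.h>

    /* Hypothetical helpers standing in for your own quad drawing. */
    void draw_opaque_quad(void);
    void draw_transparent_quad(void);

    void draw_scene(void)
    {
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LEQUAL);   /* equal-depth fragments still pass */

        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        draw_opaque_quad();       /* opaque geometry first */

        glDepthMask(GL_FALSE);    /* transparent pass reads depth but
                                     doesn't write it */
        draw_transparent_quad();
        glDepthMask(GL_TRUE);
    }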