How to create a fisheye lens effect with OpenGL? [duplicate]

I am trying to create a lomo fisheye effect on an image using OpenGL.
Should I use cube mapping and a fisheye projection? Is there any open source code I can refer to?

You can draw a single quad with the image textured onto it, and use a fragment shader to warp the texture coordinates per pixel as you desire. You'll have to do all the math yourself, but it looks like the previous post here might be a good starting point.
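For illustration, here is a minimal GLSL sketch of that idea. The distortion and vignette formulas, and all the uniform/varying names, are my own assumptions for a "lomo fisheye" look, not something prescribed by the answer:

    #version 330 core
    // Full-screen quad pass: warp the texture coordinates per pixel.
    // The distortion strength (0.35) and vignette strength (0.45) are
    // arbitrary; tune to taste. Set the texture wrap mode to
    // GL_CLAMP_TO_EDGE so corner samples outside 0..1 stay sane.
    uniform sampler2D uImage;
    in vec2 vUV;                  // 0..1 texture coordinates of the quad
    out vec4 fragColor;

    void main()
    {
        vec2 p = vUV * 2.0 - 1.0;               // center the coordinates
        float r = length(p);
        vec2 warped = p * (1.0 + 0.35 * r * r); // barrel warp: magnifies the center
        vec3 color = texture(uImage, warped * 0.5 + 0.5).rgb;
        color *= 1.0 - 0.45 * r * r;            // lomo-style dark vignette
        fragColor = vec4(color, 1.0);
    }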

Taking the question beyond an effect on a single image to producing "true" fisheye views, i.e. a usable field of view of 180+ degrees...
There are two slightly different methods to adapt an existing pipeline to fisheye view (with "simple" OpenGL). Both require rendering the scene up to six times - once for each side of the "box" that will be projected onto a flat screen. Each side/surface must be square (or should be, depending on the method) and is likely smaller than the original full viewport.
The number of sides required depends on how wide a fisheye field of view is requested. In a typical FPS, three sides are enough for a FOV of 130 degrees; a FOV of up to 220 degrees needs five sides.
Method 1 - cubemap texture (GL_ARB_texture_cube_map)
init once, for a specific FOV: pre-calculate a translation table from 2D on-screen coordinates to cubemap texture 3D coordinates; a 16x16 grid for the whole screen should be enough
set up the viewport and position the camera accordingly to render the box sides; do the usual rendering
bind the sides to the GL_TEXTURE_CUBE_MAP_ARB texture
iterate over the screen, emitting (rectangular) GL_QUAD_STRIPs using the translation table and the cubemap (a shader-based equivalent is sketched below).
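For comparison, here is what Method 1's final step looks like when done with a fragment shader instead of the precomputed quad-strip grid: a full-screen pass that converts each screen position to a cubemap direction. The equidistant mapping and all names are my assumptions:

    #version 330 core
    // Full-screen pass replacing the quad-strip grid: each screen point
    // becomes a cubemap direction via an equidistant fisheye mapping.
    uniform samplerCube uCube;      // the six rendered box sides
    uniform float uFovRadians;      // total fisheye FOV, e.g. radians(180.0)
    in vec2 vUV;                    // 0..1 across the screen
    out vec4 fragColor;

    void main()
    {
        vec2 p = vUV * 2.0 - 1.0;                        // center at origin
        float r = length(p);
        if (r > 1.0) { fragColor = vec4(0.0); return; }  // outside image circle
        float theta = r * uFovRadians * 0.5;             // angle from view axis
        vec3 dir = vec3(sin(theta) * p / max(r, 1e-6), -cos(theta));
        fragColor = texture(uCube, dir);
    }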
Method 2 - 2d or rectangular textures (GL_NV_texture_rectangle)
init once, for a specific FOV, pre-calculate a "ray" table of texture + 2d texture coordinates to 2d screen coordinates
as in Method 1, setup the viewport and position camera accordingly to render box sides, do the usual rendering
bind sides to the GL_TEXTURE_RECTANGLE_NV or GL_TEXTURE_2D textures
iterate over textures emitting (trapezoid) GL_QUAD_STRIP-s on the screen using the "ray" table.
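A sketch of Method 2's final pass in legacy (compatibility-profile) C/OpenGL. The ray-table layout, the struct, and the assumption that each strip stays within one side texture are mine, not from the answer:

    #include <GL/gl.h>

    /* One entry per grid node: which box side to sample, its texture
     * coordinates there, and the warped on-screen position. */
    typedef struct { int side; float s, t; float x, y; } RayEntry;

    void draw_fisheye(const RayEntry *ray, int cols, int rows,
                      const GLuint *side_tex)
    {
        for (int j = 0; j < rows - 1; ++j) {
            /* assumes the table is split so a strip stays on one side */
            glBindTexture(GL_TEXTURE_2D, side_tex[ray[j * cols].side]);
            glBegin(GL_QUAD_STRIP);
            for (int i = 0; i < cols; ++i) {
                const RayEntry *a = &ray[j * cols + i];       /* top edge */
                const RayEntry *b = &ray[(j + 1) * cols + i]; /* bottom edge */
                glTexCoord2f(a->s, a->t); glVertex2f(a->x, a->y);
                glTexCoord2f(b->s, b->t); glVertex2f(b->x, b->y);
            }
            glEnd();
        }
    }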
Method 1 is simpler and delivers better results.
Gotchas:
set the cubemap texture wrapping to GL_CLAMP_TO_EDGE (see the snippet after this list)
in a typical FPS, the player view is not only pitch and yaw but also roll - calculate the camera orientation for each side via a proper rotation
if the render loop is combined with game progress / physics / AI, the repeated scene re-rendering may confuse existing internals.
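The wrapping gotcha in code (ARB-suffixed enums as used above; the core profile uses the same names without the suffix):

    glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, cube_tex);
    glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);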
All of this, of course, depends on the specifics of a particular engine. I'm not sure how well this applies to the OpenGL 3.3+ core profile, but the idea should be the same.
It is possible to draw the world as a fisheye in one pass by doing the fisheye transformation in the vertex shader - but it requires the original geometry to be sufficiently (pre-)tessellated. Alternatively, geometry and/or tessellation shaders could be employed to organize the tessellation / transform feedback on the GPU. The latter should perhaps be built into the renderer from the ground up.
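Since the original vertex shader example is lost (as noted below), here is a rough reconstruction of the one-pass idea under my own assumptions - an equidistant mapping applied per vertex:

    #version 330 core
    // Sketch: apply an equidistant fisheye mapping in the vertex shader
    // instead of a projection matrix. Edges between vertices stay straight
    // on screen, which is why the geometry must be densely tessellated.
    layout(location = 0) in vec3 aPos;
    uniform mat4 uModelView;
    uniform float uFovRadians;            // total fisheye FOV
    uniform float uNear, uFar;

    void main()
    {
        vec3 e = (uModelView * vec4(aPos, 1.0)).xyz;  // eye space, -z forward
        float d = max(length(e), 1e-6);
        float theta = acos(-e.z / d);                 // angle from view axis
        float r = theta / (uFovRadians * 0.5);        // equidistant: r ~ theta
        vec2 dir = (length(e.xy) > 1e-6) ? normalize(e.xy) : vec2(0.0);
        float depth = (d - uNear) / (uFar - uNear) * 2.0 - 1.0;  // crude depth
        gl_Position = vec4(dir * r, depth, 1.0);
    }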
For a well-isolated example using the Quake 1 engine, see Fisheye and Panorama OpenGL FPS and this diff specifically. Unfortunately, the vertex shader example is lost.

Related

How to implement a Fish-Eye projection of an image on a planar surface using OpenGL [duplicate]


Method to fix the video-projector deformation with GLSL/HLSL full-screen shader

I am working in the VR field, where good calibration of a projected screen is very important, and because of difficult-to-adjust ceiling mounts and other hardware specifics, I am looking for a full-screen shader method to “correct” the shape of the screen.
Most 2D or 3D engines allow applying a full-screen effect or deformation by redrawing the rendered result on a quad that you can deform or render in a custom way.
The first idea was to use a vertex shader to offset the corners of this screen quad, so the image is deformed into a quadrilateral (like the hardware keystone correction on a projector), but that won't be enough for the requirements
(this approach is described on math.stackexchange with a live fiddle demo).
In my target case:
The image deformation must be non-linear most of the time, so 9 or 16 control points are needed for a finer adjustment.
The borders of the image are not straight (barrel or pincushion effect), so even with few control points the image must be distorted in a curved way in between. Otherwise the deformation would produce visible linear seams at each control point's boundary.
Ideally, knowing the corrected position of each control point of a 3x3 or 4x4 grid, the way would be to define a continuous transform for the texture coordinates of the image being drawn on the full-screen quad:
(u, v) => (corrected_u, corrected_v)
You can find an illustration here.
I’ve seen some FFD (free-form deformation) algorithms that work in 2D or 3D and would allow deforming an image or mesh “softly”, as if it were made of rubber, but the implementation seems heavy for a real-time shader.
I also thought of a weight-based deformation as used in skeletal/soft-body animation, but weighting the control points properly seems uncertain.
Do you know a method, algorithm or general approach that could help me solve the problem?
I’ve seen mesh-based deformations like the one the new Oculus Rift DK2 requires for its own distortion correction, but most 2D/3D engines use a single quad made of only 4 vertices by default.
If you need non-linear deformation, Bézier surfaces are pretty handy and easy to implement.
You can either pre-build them on the CPU, or use hardware tessellation (example provided here).
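For illustration, a minimal fragment-shader sketch of a bicubic Bézier warp of the texture coordinates; the 4x4 control-grid uniform and the other names are assumptions, not from any particular engine:

    #version 330 core
    // Warp texture lookups through a bicubic Bezier surface. An identity
    // grid would have uCtrl[j*4+i] = vec2(i/3.0, j/3.0); moving a control
    // point bends the image smoothly around it, with no visible seams.
    uniform sampler2D uImage;
    uniform vec2 uCtrl[16];          // row-major 4x4 control points in UV space
    in vec2 vUV;                     // plain 0..1 quad coordinates
    out vec4 fragColor;

    // Cubic Bernstein basis at t
    vec4 bernstein(float t)
    {
        float s = 1.0 - t;
        return vec4(s*s*s, 3.0*s*s*t, 3.0*s*t*t, t*t*t);
    }

    void main()
    {
        vec4 bu = bernstein(vUV.x);
        vec4 bv = bernstein(vUV.y);
        vec2 warped = vec2(0.0);
        for (int j = 0; j < 4; ++j)
            for (int i = 0; i < 4; ++i)
                warped += bu[i] * bv[j] * uCtrl[j * 4 + i];
        fragColor = texture(uImage, warped);
    }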
Continuing my research, I found a way.
I created a 1D RGB texture corresponding to a "ramp" of cosine values. Its three channels are the influence coefficients of the offset parameters along a 0..1 axis, with the three control points at 0, 0.5 and 1:
Red starts at 1 at x=0 and goes down to 0 at x=0.5
Green starts at 0 at x=0, goes to 1 at x=0.5 and goes back to 0 at x=1
Blue starts at 0 at x=0.5 and goes up to 1 at x=1
With these, from 9 float2 uniforms I can interpolate my parameters very smoothly over the image (with 3 lookups for the horizontal axis and a final one for the vertical).
Then, once interpolated, I offset the texture coordinates with these and it works :-D
This is more or less a weighted interpolation of the coordinates, using texture lookups for speed.
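Reconstructed as a fragment-shader sketch (all names and the exact lookup scheme are my assumptions based on the description above):

    #version 330 core
    // uRamp: the 1D RGB ramp described above (R, G, B weights along 0..1).
    // uOffset[9]: per-control-point UV offsets for a 3x3 grid, row-major.
    uniform sampler2D uImage;
    uniform sampler1D uRamp;
    uniform vec2 uOffset[9];
    in vec2 vUV;
    out vec4 fragColor;

    void main()
    {
        vec3 wx = texture(uRamp, vUV.x).rgb;   // column weights
        vec3 wy = texture(uRamp, vUV.y).rgb;   // row weights
        vec2 offset = vec2(0.0);
        for (int j = 0; j < 3; ++j)            // rows
            for (int i = 0; i < 3; ++i)        // columns
                offset += wx[i] * wy[j] * uOffset[j * 3 + i];
        fragColor = texture(uImage, vUV + offset);
    }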

OpenGL Perspective Texture Flickering

I have a very simple OpenGL (3.2) setup, no lighting, perspective projection and a simple shader program (applies projection transformation and uses texture2D to read the color from the texture).
The camera is looking down the negative z-axis and I draw a few walls and pillars on the x-y-plane with a texture (http://i43.tinypic.com/2ryszlz.png).
Now I'm moving the camera in the x-y-plane and this is what it looks like:
http://i.imgur.com/VCrNcly.gif.
My question is now: How do I handle the flickering of the wall texture?
As the camera moves along the walls, the viewing angle onto the texture compresses it on screen, so one pixel on the screen actually covers several pixels of the texture, but only one of them is chosen for display. From the information I have access to in the shaders, I don't see how to perform an operation that interpolates the required color.
As this looks like a problem nearly every 3D application should have, the solution is probably pretty simple (I hope?).
I can't quite make out the images, but from what you are describing you seem to be looking for mipmapping. It's a very simple and very widely used technique, and you will be able to use it by adding one or two lines to your program. Good luck.
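For a core-profile 3.x setup like the one described, those one or two lines would look roughly like this (assuming `texture` holds the wall texture's id and a context is current):

    /* Generate mipmaps for the already-loaded wall texture and switch the
     * minification filter to trilinear. glGenerateMipmap is core since
     * OpenGL 3.0, so it fits a 3.2 context. */
    glBindTexture(GL_TEXTURE_2D, texture);
    glGenerateMipmap(GL_TEXTURE_2D);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                    GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);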

OpenGL 360 degree perspective

I'm looking to capture a 360-degree (spherical panorama) photo of my scene. How can I best do this? If I have it right, I can't do this the ordinary way of simply setting the perspective FOV to 360.
If I need a vertex shader for this, is there one available?
This is actually a nontrivial thing to do.
In a naive approach, a vertex shader that transforms the vertex positions not by matrix multiplication but by feeding them through trigonometric functions may seem to do the trick. The problem is that this will not make straight lines "curvy". You could use a tessellation shader to add sufficient geometry to compensate for this.
The most straightforward approach is two-fold. First you render your scene into a cubemap, i.e. render with a 90°×90° FOV into the 6 directions making up a cube. This allows you to use regular perspective projections to render the scene.
In a second step you use the generated cubemap to texture a screen-filling grid, where the texture coordinates of each vertex are azimuth and elevation.
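A fragment-shader version of that second step might look like this, mapping each output pixel's azimuth/elevation to a cubemap direction (the names are assumptions):

    #version 330 core
    // Full-screen pass: convert each output pixel's (azimuth, elevation)
    // to a direction and sample the cubemap, producing an equirectangular
    // panorama.
    uniform samplerCube uEnvMap;
    in vec2 vUV;                         // 0..1 across the panorama
    out vec4 fragColor;

    const float PI = 3.14159265358979;

    void main()
    {
        float azimuth   = (vUV.x * 2.0 - 1.0) * PI;   // -pi .. +pi
        float elevation = (vUV.y - 0.5) * PI;         // -pi/2 .. +pi/2
        vec3 dir = vec3(cos(elevation) * sin(azimuth),
                        sin(elevation),
                        -cos(elevation) * cos(azimuth));
        fragColor = texture(uEnvMap, dir);
    }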
Another approach is to use tiled rendering with a very small FOV and rotating the "camera", kind of like taking a panoramic picture without a wide-angle lens. As a matter of fact, the cubemap-based approach is tiled rendering, but it's easier to get right than changing the camera direction and viewport placement directly.

Background image in OpenGL

I'm doing a 3D asteroids game in Windows (using OpenGL and GLUT) where you move through a bunch of obstacles in space and survive. I'm looking for a way to set an image background instead of the boring background color options. I'm new to OpenGL and all I can think of is to texture-map a sphere and give it a ridiculously large radius. What is the standard way of setting an image background in a 3D game?
The standard method is to draw two texture-mapped triangles whose coordinates are x, y = ±1, z = 0, w = 1, with both the camera and perspective matrices set to the identity matrix.
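A minimal sketch of that pass's vertex shader; since the positions are already in clip space, no matrix is applied at all (attribute names are mine):

    #version 330 core
    // Background pass: the two triangles' corners are already in clip
    // space (x, y = +-1, z = 0, w = 1), so they always fill the screen.
    layout(location = 0) in vec2 aPos;
    out vec2 vUV;

    void main()
    {
        vUV = aPos * 0.5 + 0.5;          // map -1..1 to 0..1 for the texture
        gl_Position = vec4(aPos, 0.0, 1.0);
    }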
Of course, in the context of a 'space' game where one might want the background to rotate, the natural choice is to render a cube with a cubemap (perhaps showing galaxies). As depth buffering is turned off during the background rendering, the cube doesn't even have to be "infinitely" large. A unit cube will do, as there is no way to tell how close the camera is to it.
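A sketch of that draw-order trick with depth writes disabled (`draw_skybox_cube` is an assumed helper that draws the unit cube with the cubemap bound):

    /* Draw the background cube first with depth buffering off, so the
     * scene rendered afterwards always covers it. */
    glDepthMask(GL_FALSE);
    glDisable(GL_DEPTH_TEST);
    draw_skybox_cube();
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    /* ...then render the scene as usual... */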