Lens shader / Image distortion - OpenGL

Well, I have a 3D scene, currently with just a textured quad (a painting) in it. Between the painting and the "camera" I have placed another quad that I would like to behave like an optical lens, distorting the picture "below" it.
How would one achieve this, preferably with a shader and some pixel buffers?

Here is an example I found a while ago which does something very similar to what you want: http://www.paulsprojects.net/opengl/refract/refract.html
You will probably have to modify the code a bit to achieve the inversion effect you want, but this will get you started on the right track.
Edit:
By the way, you will not need the second image (the inverted small rectangle). Just use a single background image and the shader.
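For a flavour of what the shader stage can look like, here is a minimal sketch (GLSL embedded as a C++ string), assuming the painting as seen without the lens has first been rendered into a texture; all the uniform names here are made up for illustration:

    // Fragment shader for the lens quad: it samples the already-rendered
    // painting, bending the lookup coordinates inside the lens radius.
    static const char* lensFragSrc = R"(
        uniform sampler2D sceneTex;  // the painting, rendered without the lens
        uniform vec2  lensCenter;    // lens centre, in texture coordinates
        uniform float lensRadius;    // lens radius, in texture coordinates
        uniform float strength;     // > 0 magnifies, < 0 shrinks
        void main() {
            vec2 uv = gl_TexCoord[0].st;
            vec2 d  = uv - lensCenter;
            float r = length(d) / lensRadius;
            if (r < 1.0)             // inside the lens: bend the lookup
                uv = lensCenter + d * mix(1.0 - strength, 1.0, r * r);
            gl_FragColor = texture2D(sceneTex, uv);
        }
    )";

Render the scene to sceneTex first (via a pixel buffer or FBO), then draw the lens quad with this shader bound.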

Between the painting and the "camera" I have placed another quad that I would like to behave like an optical lens:
This is a tricky one. First, one must understand that OpenGL is a so-called localized-rendering-model rasterizer, which means, in layman's terms, that it works like pencils and brushes on a canvas.
It thus stands in stark contrast to global-scene-representation renderers like raytracers. A raytracer operates on a fully defined scene, and because of that it can do things like refraction trivially.
Indeed, one must treat OpenGL like an artist treats their tools. So any optical "effect" you want to create must be implemented by mastering the various drawing techniques possible with the tools OpenGL offers. To create the effect you desire, you must implement a multistage process.
For refraction you first render the scene as "seen" by the refracting object in all directions (you create a dynamic cube map), then you use this cube map as input data for rasterizing the "refracting" object, where a shader is used to determine the refracted direction of a ray of light hitting the rasterized fragments.
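A sketch of the shader used in that second stage (fragment stage only; the varyings vNormal and vIncident are assumed to be supplied by a matching vertex shader, and the names are illustrative):

    // Fragment shader for the refracting object, embedded as a C++ string.
    static const char* refractFragSrc = R"(
        uniform samplerCube envMap;   // dynamic cube map of the surroundings
        uniform float eta;            // ratio of refractive indices, e.g. 1.0/1.5
        varying vec3 vNormal;         // surface normal at this fragment
        varying vec3 vIncident;       // direction from the camera to the fragment
        void main() {
            vec3 dir = refract(normalize(vIncident), normalize(vNormal), eta);
            gl_FragColor = textureCube(envMap, dir);
        }
    )";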
BTW: what holds for refraction holds for any other light-interaction effect. Shadows are just as non-trivial as refraction in OpenGL.

Related

How to simulate mathematically correct shadows of transparent objects?

I want to simulate shadows cast by complex and composite transparent objects.
These shadows must be mathematically correct for a particular light source (at least for a point light). I think this is true for any graphics library, isn't it?
Then, there must NOT be any refraction at all.
This image is not what I actually want to get, of course.
Can OpenGL do this? If it cannot, then what should I use instead?
UPD: So I need some path tracer. Is there one which I could use programmatically: give it a file with a 3D scene of objects and get the result of the tracing?
These shadows must be mathematically correct
There's no such thing as mathematically correct or incorrect illumination. What you mean is physically correct.
Images like the ones you want to create rely on light propagation. The only way to properly simulate light propagation is to shoot virtual photons into a scene and follow their paths. This is called path tracing.
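The reason a rasterizer cannot shortcut this for transparent objects is that each photon's path bends at every interface. Here is that single bending step (Snell's law in vector form) as a self-contained C++ sketch; the function name is made up for illustration:

    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };
    static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 mul(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Bend unit direction d at a surface with unit normal n (pointing toward
    // the incoming ray); eta = n1/n2, e.g. 1.0/1.5 when entering glass.
    // Returns false on total internal reflection: the photon reflects instead.
    static bool refractDir(Vec3 d, Vec3 n, double eta, Vec3* out) {
        double c = dot(d, n);                      // negative for an entering ray
        double k = 1.0 - eta * eta * (1.0 - c * c);
        if (k < 0.0) return false;                 // total internal reflection
        *out = sub(mul(d, eta), mul(n, eta * c + std::sqrt(k)));
        return true;
    }

    int main() {
        Vec3 d = {std::sqrt(0.5), 0.0, -std::sqrt(0.5)};  // 45 degrees onto glass
        Vec3 n = {0.0, 0.0, 1.0}, t;
        if (refractDir(d, n, 1.0 / 1.5, &t))
            std::printf("refracted: (%.3f, %.3f, %.3f)\n", t.x, t.y, t.z);
        return 0;
    }

A full tracer repeats this step, together with diffuse bounces and reflections, for millions of paths per image.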
Can OpenGL do this?
OpenGL just draws points, lines and triangles… one at a time, without any concept of a scene or models.
Old, fixed-function-pipeline OpenGL had a simple Blinn illumination model built in, but this just calculated a "light" value per vertex based on surface orientation (normal) and position relative to a light source.
Modern OpenGL does not even do that. Instead it relies on the programmer to provide programs that are executed for every vertex, to decide where in the picture it goes, and for every fragment (roughly a pixel) drawn, to determine which color to give it.
In these programs, called shaders, you can do just about anything. So if you want to implement a path tracer using OpenGL shaders, you most certainly can. But this path tracer will not interact with the points, lines and triangles you draw. Instead, these will just serve to define the boundaries within which the shaders do their computations.
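To make that division of labour concrete, here is roughly the smallest such pair of programs (GLSL embedded as C++ strings; the names mvp, position and color are illustrative): one runs per vertex and decides where it lands on screen, the other runs per fragment and decides its color.

    // The two programmer-supplied stages, as passed to glShaderSource().
    static const char* vertSrc = R"(
        uniform mat4 mvp;               // modelview-projection matrix
        attribute vec3 position;        // per-vertex input
        void main() {
            gl_Position = mvp * vec4(position, 1.0);   // where it goes
        }
    )";
    static const char* fragSrc = R"(
        uniform vec4 color;
        void main() {
            gl_FragColor = color;       // which color the pixel gets
        }
    )";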
If it cannot, then what should I use instead?
It's not so much a question of whether it is possible, but of how easy it is to implement. In your case OpenGL is certainly not the right programming environment, because you'd essentially be starting from scratch. Instead you should use one of the existing path tracers around. There are also some that are GPU-accelerated.

OpenGL Perspective Texture Flickering

I have a very simple OpenGL (3.2) setup, no lighting, perspective projection and a simple shader program (applies projection transformation and uses texture2D to read the color from the texture).
The camera is looking down the negative z-axis and I draw a few walls and pillars on the x-y-plane with a texture (http://i43.tinypic.com/2ryszlz.png).
Now I'm moving the camera in the x-y-plane and this is what it looks like:
http://i.imgur.com/VCrNcly.gif.
My question is now: How do I handle the flickering of the wall texture?
As the camera centers on the walls, the viewing angle compresses the texture on screen, so one pixel on the screen actually covers several pixels of the texture, but only one of them is chosen for display. From the information I have access to in the shaders, I don't see how to perform an operation which interpolates the required color.
As this looks like a problem nearly every 3D application should have, the solution is probably pretty simple (I hope?).
I can't quite make out the images, but from what you are describing you seem to be looking for MIPMAPPING. Please google it; it's a very simple and very widely used concept. You will be able to use it by adding one or two lines to your program. Good luck. I'd go into more detail, but I am out of time for today.
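In its simplest form it really is only a couple of lines at texture-creation time; a sketch, assuming a GL 3.x context (where glGenerateMipmap is core) and a texture already uploaded and bound:

    #include <GL/glew.h>   // or any loader providing the GL 3.x entry points

    // Call once after glTexImage2D has uploaded the base level.
    void enableTrilinearMipmapping() {
        glGenerateMipmap(GL_TEXTURE_2D);              // build the mip chain
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                        GL_LINEAR_MIPMAP_LINEAR);     // trilinear minification
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }

With GL_LINEAR_MIPMAP_LINEAR the hardware averages the several texels that fall under one screen pixel across mip levels, which removes exactly the flickering described above.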

2D Water top surface profile

I am trying to create the effect of the water surface thickness with a vertex-fragment shader.
I am in a 3D game environment, but it's a scrolling view, so a "2D" view.
Here is a good tutorial for creating such an effect in true 2D using a fragment shader.
But I don't think that can be used in my case.
For the moment I have only a plane where I apply refraction.
And I want to apply the water thickness effect. But I don't know how to do it.
I am not trying to create any water deformation/displacement using vertices for the moment; that is not the point.
I don't know if it's possible with a simple quad; maybe I should use an object like this.
Here are some examples.
Thanks a lot!
[EDIT] Added Rayman water effect to have a better reference of the effect.
I am trying to create a 2D water effect with a vertex-fragment shader on a simple quad.
Your first misconception is thinking in 2D. What you see in your right picture is the interaction of light with a 2D surface in 3D space. A simple quad will not suffice.
For water you need some surface displacement. You can either simulate this by solving a wave equation, or use a Fourier-transform-based approach; I suggest the second. Next you render your scene "regular" for everything above the water, then "murky and refracted" for everything below the water line. Render both to textures.
Then you render the water surface. When looking at the air→water interface (i.e. from above), use a Fresnel reflection term, i.e. mix between the top reflection and the see-through view depending on the angle of incidence, and for very shallow angles emulate Brewster reflection. For the water→air interface (i.e. from below) you do something similar, only you don't need the Fresnel term, just the Brewster term, to account for total internal reflection.
Since you do all the mixing in the fragment shader, you don't need blending, and hence there is no need to sort draw operations by water depth.
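A sketch of that mixing step as a fragment shader, using Schlick's approximation of the Fresnel term; the two textures are the "regular" and "murky" renders mentioned above, and all names are illustrative:

    static const char* waterFragSrc = R"(
        uniform sampler2D reflectionTex;  // scene above the water, mirrored
        uniform sampler2D refractionTex;  // scene below the water line, murky
        uniform vec2 viewportSize;
        varying vec3 vNormal;             // water surface normal
        varying vec3 vToEye;              // from the fragment towards the camera
        void main() {
            // screen-space lookup; real setups also flip the reflection
            // and perturb both lookups by the displaced normal
            vec2 uv = gl_FragCoord.xy / viewportSize;
            float c = clamp(dot(normalize(vNormal), normalize(vToEye)), 0.0, 1.0);
            // Schlick's approximation; F0 is about 0.02 for air-water
            float fresnel = 0.02 + 0.98 * pow(1.0 - c, 5.0);
            vec3 refl = texture2D(reflectionTex, uv).rgb;
            vec3 refr = texture2D(refractionTex, uv).rgb;
            gl_FragColor = vec4(mix(refr, refl, fresnel), 1.0);
        }
    )";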
Yes, rendering water is not trivial.

Rendered 3D scene to point cloud

Is there a way to extract a point cloud from a rendered 3D scene (using OpenGL)?
In detail:
The input should be a rendered 3D scene.
The output should be, e.g., an array of three-dimensional vertices (x, y, z).
Mission possible or impossible?
Render your scene using an orthographic view so that all of it fits on screen at once.
Use a g-buffer (search for this term or "fat pixel" or "deferred rendering") to capture (X, Y, Z, R, G, B, A) at each sample point in the framebuffer.
Read back your framebuffer and put the (X, Y, Z, R, G, B, A) tuple at each sample point in a linear array.
You now have a point cloud sampled from your conventional geometry using OpenGL. Apart from the readback from the GPU to the host, this will be very fast; a sketch of the readback step follows at the end of this answer.
Going further with this:
Use depth peeling (search for this term) to generate samples on surfaces that are not nearest to the camera.
Repeat the rendering from several viewpoints (or equivalently for several rotations of the scene) to be sure of capturing fragments from all the nooks and crannies of the scene, and append the points generated from each pass into one big linear array.
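The promised sketch of the readback step: rather than a full g-buffer, this simpler variant reconstructs X, Y, Z from the standard depth buffer, assuming a single orthographic pass drawn with an identity modelview matrix (so eye space coincides with world space); the function and its glOrtho-style parameters are illustrative:

    #include <GL/glew.h>   // or any loader providing the GL entry points
    #include <vector>

    struct Point { float x, y, z, r, g, b, a; };

    // Read the framebuffer back and unproject every covered sample.
    std::vector<Point> readbackPointCloud(int w, int h,
                                          float left, float right,
                                          float bottom, float top,
                                          float zNear, float zFar)
    {
        std::vector<float> depth(w * h);
        std::vector<unsigned char> color(w * h * 4);
        glReadPixels(0, 0, w, h, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, color.data());

        std::vector<Point> cloud;
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                float d = depth[y * w + x];
                if (d >= 1.0f) continue;       // background: nothing drawn here
                const unsigned char* c = &color[(y * w + x) * 4];
                Point p;
                // invert the viewport and orthographic projection mappings
                p.x = left   + (right - left) * (x + 0.5f) / w;
                p.y = bottom + (top - bottom) * (y + 0.5f) / h;
                p.z = -(zNear + (zFar - zNear) * d);   // eye space looks down -z
                p.r = c[0] / 255.0f; p.g = c[1] / 255.0f;
                p.b = c[2] / 255.0f; p.a = c[3] / 255.0f;
                cloud.push_back(p);
            }
        }
        return cloud;
    }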
I think you should take your input data and manually multiply it by your transformation and modelview matrices. No need to use OpenGL for that, just some vector/matrix math.
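That really is just a few lines; a minimal sketch using the column-major storage order OpenGL itself uses:

    // Multiply a 4x4 matrix (column-major, as OpenGL stores them) by a vec4.
    struct Vec4 { float v[4]; };
    struct Mat4 { float m[16]; };          // m[col * 4 + row]

    Vec4 transform(const Mat4& M, const Vec4& p) {
        Vec4 r = {{0.0f, 0.0f, 0.0f, 0.0f}};
        for (int row = 0; row < 4; ++row)
            for (int col = 0; col < 4; ++col)
                r.v[row] += M.m[col * 4 + row] * p.v[col];
        return r;
    }
    // Apply the modelview matrix first, then the projection matrix, then
    // divide by r.v[3] (the perspective divide) to reach device coordinates.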
If I understand correctly, you want to deconstruct a final (2D) rendering of a 3D scene. In general, there is no capability built into OpenGL that does this.
There are however many papers describing approaches to analyzing a 2D image to generate a 3D representation. This is for example what the Microsoft Kinect does to some extent. Look at the papers presented at previous editions of SIGGRAPH for a starting point. Many implementations probably make use of the GPU (OpenGL, DirectX, CUDA, etc.) to do their magic, but that's about it. For example, edge-detection filters to identify the visible edges of objects and histogram functions can run on the GPU.
Depending on your application domain, you might be in for something near impossible or there might be a shortcut you can use to identify shapes and vertices.
Edit:
I think you might have a misunderstanding of how OpenGL rendering works. The application produces and sends to OpenGL the vertices of triangles forming polygons and 3D objects. OpenGL then rasterizes (i.e. converts to pixels) these objects to form a 2D rendering of the 3D scene from a particular point of view with a particular field of view. When you say you want to retrieve a "point cloud" of the vertices, it's hard to understand what you want, since you are responsible for producing these vertices in the first place!

Terrain minimap in OpenGL?

So I have what is essentially a game... There is terrain in this game. I'd like to be able to create a top-down-view minimap so that the "player" can see where they are going. I'm doing some shading etc. on the terrain, so I'd like that to show up in the minimap as well. It seems like I just need to create a second camera and somehow get that camera's display to show up in a specific box. I'm also thinking something like a mirror would work.
I'm looking for approaches that I could take that would essentially give me the same view I currently have, just top down... Does this seem feasible? Feel free to ask questions... Thanks!
One way to do this is to create an FBO (framebuffer object) with a renderbuffer attached for depth and a texture attached as its color buffer, render your minimap into it, and then map the resulting texture to anything you'd like, generally a quad. You can do this for all sorts of HUD objects. This also means that you don't have to redraw the contents of your HUD/menu objects as often as your main view; update the associated buffer only as often as you require. You will often want to downsample (in the polygon-count sense) the objects/scene you are rendering to the FBO in this case. The functions in the API you'll want to check into are:
glGenFramebuffersEXT
glBindFramebufferEXT
glGenRenderbuffersEXT
glBindRenderbufferEXT
glRenderbufferStorageEXT
glFramebufferRenderbufferEXT
glFramebufferTexture2DEXT
glGenerateMipmapEXT
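A minimal sketch wiring those together (assumes a context exposing EXT_framebuffer_object; error checking, e.g. via glCheckFramebufferStatusEXT, is omitted, and the names and sizes are illustrative):

    #include <GL/glew.h>   // or any loader exposing EXT_framebuffer_object

    // Returns the texture the minimap lands in; *fboOut receives the FBO.
    GLuint createMinimapTarget(int w, int h, GLuint* fboOut) {
        GLuint colorTex, depthRbo;

        glGenTextures(1, &colorTex);                 // color buffer = a texture
        glBindTexture(GL_TEXTURE_2D, colorTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        glGenRenderbuffersEXT(1, &depthRbo);         // depth buffer for the pass
        glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRbo);
        glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, w, h);

        glGenFramebuffersEXT(1, fboOut);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, *fboOut);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                  GL_TEXTURE_2D, colorTex, 0);
        glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                     GL_RENDERBUFFER_EXT, depthRbo);

        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0); // back to the window
        return colorTex;                             // map this onto the HUD quad
    }

Each frame: bind the FBO, set up the top-down camera, draw the terrain, bind framebuffer 0 again, then draw a HUD quad textured with the returned texture.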
There is a write-up on using FBOs on gamedev.net. Another potential optimization applies if the contents of the minimap are static and you are simply moving a camera over this static view (truly just a map): you can render a portion of the map that is much larger than what you actually want to display to the player, and fake a camera by adjusting the texture coordinates of the object it's mapped onto. This only works if your minimap uses an orthographic projection.
Well, I don't have an answer to your specific question, but it's common in games to render the world to an image using an orthographic projection from above, and use that for the minimap. It would at least be less performance-intensive than rendering it on the fly.