How to get the depth of field effect using OpenGL?

I am wondering how to implement the "depth of field/circle of confusion" effect using OpenGL?
Is there any built-in method or library to support it?

You will not find anything "built in" to OpenGL that will give you what you are looking for. You will have to implement this effect through a shader, which is fairly straightforward.
An article on how to achieve this effect is freely available here:
Nvidia article on depth of field techniques

You can compute different DOF approximations. For a simple approximation, you could render near objects into one texture and far objects into another. In another pass you could blur the texture holding the far objects, then combine both textures into a single image mapped onto a screen-space rectangle. This has little to do with how depth of field actually works, but real-time graphics often relies on little tricks like this; the visual outcome just has to be convincing.
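As a sketch of the compositing pass described above, a GLSL fragment shader might look like the following (the uniform and varying names are illustrative assumptions, not from the answer; the near layer is assumed to be rendered with an alpha channel):

```glsl
// Composite a sharp near-field texture over a pre-blurred far-field texture.
#version 330 core
in vec2 vTexCoord;
out vec4 fragColor;

uniform sampler2D uNearTex;       // near objects, rendered sharp
uniform sampler2D uFarBlurredTex; // far objects, blurred in a previous pass

void main() {
    vec4 nearColor = texture(uNearTex, vTexCoord);
    vec4 farColor  = texture(uFarBlurredTex, vTexCoord);
    // Where the near layer is empty (alpha = 0), show the far layer.
    fragColor = mix(farColor, nearColor, nearColor.a);
}
```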

Related

How to use the hardware's 3D texture sampling with a flipbook style volume texture?

One question sort of addresses the problem, and another asks a related question.
I have a 2D texture that has 12x12 slices of a volume layered in a grid like this:
What I am doing now is calculating the offset and sampling based on the 3D coordinate inside the volume myself, in HLSL code. I have followed the descriptions found here and here, where the first link also talks about 3D sampling from a 2D sliced texture. I have also heard that modern hardware has the ability to sample 3D textures.
That being said, I have not found any description or example code that samples a 3D texture this way. What HLSL or OpenGL function can I use to sample this flipbook type of texture? If you can, please add a small example snippet with explanations. If you can't, pointing me to one, or to the documentation, would be appreciated. I have found no sampler function where I can provide the number of layers in the U and V directions, so I don't see how it could sample without knowing how many slices there are per axis.
If I am misunderstanding this completely I would also appreciate being told so.
Thank you for your help.
OpenGL has supported true 3D textures for ages (3D texture support appeared as far back as OpenGL 1.2). With that, you upload your 3D texture not as a "flipbook" but simply as a stack of 2D images, using the function glTexImage3D. In GLSL you then just use the regular texture access function, but with a sampler3D and a three-component texture coordinate vector (except in older versions of GLSL, i.e. before GLSL 1.5/OpenGL 3, where you use texture3D).
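A minimal sketch of the upload side might look like this (assuming `slices` points to single-channel voxel data, tightly packed slice after slice):

```c
/* Upload a stack of 2D slices as one true 3D texture. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_3D, tex);
glTexImage3D(GL_TEXTURE_3D, 0, GL_R8,
             width, height, depth,            /* depth = number of slices */
             0, GL_RED, GL_UNSIGNED_BYTE, slices);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
```

In the shader you would then declare `uniform sampler3D uVolume;` and sample it with `texture(uVolume, vec3(u, v, w))` — the hardware interpolates between slices for you, so no per-axis slice count is needed.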

opengl - possibility of a mirroring shader?

Until today, when I wanted to create reflections (a mirror) in OpenGL, I rendered a view into a texture and displayed that texture on the mirroring surface.
What I want to know is: are there any other methods to create a mirror in OpenGL?
And second, can this be done solely in shaders (e.g. a geometry shader)?
Ray-tracing. You can write a ray-tracer in the fragment shader (every fragment follows a ray). Ray-tracers can perfectly deal with reflection (mirroring) on all kinds of surfaces.
You can find an OpenGL example here and a WebGL example including mirroring here.
There is no universal way to do this in any 3D API I know of.
Depending on your case there are several possible techniques with different downsides.
Planar reflections: That's what you are doing already.
Note that your mirror needs to be flat, and you have to clip so that anything closer than the mirror isn't rendered into the texture.
Good old cubemaps: attach a cubemap to each mirror, then sample it in the reflection direction. This works for any surface, but you will need to render the cubemaps (which can be done only once if you don't care about moving objects being reflected). I don't think you can do this without shaders, but only the mirror will need one. It's a very common technique, as it's easy to implement, can be dynamic, and is fairly cheap while being easy to integrate into an existing engine.
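The cubemap lookup itself is only a few lines of GLSL (a sketch; the uniform and varying names are made up for illustration):

```glsl
// Reflect the view vector about the surface normal and sample the cubemap.
#version 330 core
in vec3 vWorldPos;
in vec3 vNormal;
out vec4 fragColor;

uniform samplerCube uEnvMap;
uniform vec3 uCameraPos;

void main() {
    vec3 viewDir = normalize(vWorldPos - uCameraPos);
    vec3 r = reflect(viewDir, normalize(vNormal));
    fragColor = texture(uEnvMap, r);
}
```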
Screen space ray-marching: it's what danny-ruijters suggested. Kind of like SSAO: for each pixel, sample the depth buffer along the reflection vector until you hit something. This has the advantage of being applicable anywhere (on arbitrarily complex surfaces); however, it can only reflect what appears on screen, which can introduce lots of small artifacts, but it's completely dynamic and very simple to implement. Note that you will need an additional pass (or to render normals into a buffer) to access your scene's final color while computing the reflections. You absolutely need shaders for this, but it's a post-process, so it won't interfere with the scene rendering if that's what you fear.
Some modern game engines use this to add small details to reflective surfaces without the burden of having to compute/store cubemaps.
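A very rough sketch of the marching loop described above (illustrative only; `reconstructViewPos` and `projectToScreen` are assumed helpers converting between view space and screen space, and a real implementation also needs thickness tests and edge fading):

```glsl
#version 330 core
in vec2 vTexCoord;
out vec4 fragColor;

uniform sampler2D uSceneColor; // scene colour from a previous pass
uniform sampler2D uSceneDepth; // linearised view-space distance
uniform sampler2D uNormals;    // view-space normals
uniform float uStepSize;

vec3 reconstructViewPos(vec2 uv); // assumed helper, not shown
vec2 projectToScreen(vec3 p);     // assumed helper, not shown

void main() {
    vec3 viewPos = reconstructViewPos(vTexCoord);
    vec3 n = normalize(texture(uNormals, vTexCoord).xyz);
    vec3 rayDir = reflect(normalize(viewPos), n);

    vec3 p = viewPos;
    for (int i = 0; i < 64; ++i) {
        p += rayDir * uStepSize;
        vec2 uv = projectToScreen(p);
        // Hit when the marched point passes behind the depth-buffer surface.
        if (length(p) > texture(uSceneDepth, uv).r) {
            fragColor = texture(uSceneColor, uv);
            return;
        }
    }
    fragColor = vec4(0.0); // no hit: fall back to e.g. a cubemap
}
```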
There are probably many other ways to render mirrors, but these are the three main (at least as far as I know) ways of doing reflections.

Why does a texture used on OpenGL objects look poor in quality and stretched?

Here is the reference of library I am using
The texture, when pasted over an OpenGL wall, looks low-resolution and poor in quality. How can I improve it?
code
ground->m_texture = new cTexture2D();
fileload = ground->m_texture->loadFromFile(RESOURCE_PATH("resources/images/shadow.bmp"));
ground->setUseTexture(true);
ground->m_texture->setSphericalMappingEnabled(true); // this line is for circular objects, but without it the texture doesn't even show up
from an example - how it should look
How it looks in my implementation
Okay, what's happening is the following: spherical mapping generates texture coordinates based on the vertex-to-viewport vector and the normal at the vertex, mapping spherical reflection directions into a sort-of fisheye image. Since your geometry looks quite flat, the variation in texture coordinates generated by this method will be rather small, which means that you're greatly magnifying your image. If your texture filtering mode is set to nearest filtering, this is exactly what will happen.
Solution: don't use spherical texture mapping. If you want to emulate a reflection, use cubemaps (they behave much better for small deviations in the reflection vector) and switch to linear filtering mode.

Is there a way of keeping track of the relationship between vertices and pixels using either OpenGL or DirectX 11?

I would like to know if there is a way to generate a single static image of a 3D object (one object represented as a triangle list), using OpenGL or DirectX, that lets you know which specific triangles defining the object were used to generate each of the pixels forming the rendered image. I've cited OpenGL and DirectX because they are widely used graphics APIs; if somebody knows other ways of achieving this at high speed, I would also be interested in their answer. I currently use my own software implementation of the rendering pipeline to keep track of this relationship, but I would like to use the power and effects (mainly antialiasing, shadows and specific skin rendering techniques) that graphics cards offer.
Thanks very much for your help
Sure, just output a triangle identifier to a separate render-target (using MRT). In GLSL terms, this is gl_PrimitiveID, and in HLSL terms it's SV_PrimitiveID. If you are using multi-sampling, then your multi-sample buffer for that render-target becomes a list of primitives that contribute to each pixel.
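On the GLSL side, a minimal sketch might look like this (the second render target would be attached with an integer format such as GL_R32UI; gl_PrimitiveID is available in the fragment stage from GLSL 1.50 onward):

```glsl
#version 330 core
layout(location = 0) out vec4 fragColor;      // render target 0: scene colour
layout(location = 1) out uint primitiveIndex; // render target 1: triangle index

void main() {
    fragColor = vec4(1.0);                 // your usual shading goes here
    primitiveIndex = uint(gl_PrimitiveID); // triangle that produced this fragment
}
```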
Draw each triangle in a different colour. R8G8B8 offers about 16.7 million possible colours, so you can index that many triangles with it. You don't have to draw to an on-screen buffer: you could render the picture as usual and render to a second target, indexing the triangles in an off-screen buffer.

OpenGL, How to create a "bumpy Polygon"?

I am unsure of how to describe what I'm after, so I drew a picture to help:
My question, is it possible within OpenGL to create the illusion of those pixel looking bumps on a single polygon, without having to resort to using many polygons? And if it is, what's the method?
I think what you're looking for is actually parallax mapping (or parallax occlusion mapping).
Demos:
http://www.youtube.com/watch?v=01owTezYC-w
http://www.youtube.com/watch?v=gcAsJdo7dME&NR=1
http://www.youtube.com/watch?v=njKdLvmBl88
Parallax mapping basically works by using the height map to alter the texture UV coordinate being used.
The main disadvantage of parallax mapping is that anything that appears to stick 'outside' the polygon will be clipped (think of looking at an image on a 3D TV), so it's best for things indented into a surface rather than sticking out of it (although you can reduce this by making the polygon larger than the visible texture area). It's also fairly complex and would need to be combined with other shader techniques for a good effect.
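The basic UV offset can be sketched in a few lines of GLSL (illustrative; the names and the height scale are assumptions, and real parallax occlusion mapping steps along the ray instead of using a single sample):

```glsl
// Offset the texture coordinate along the tangent-space view direction
// in proportion to the sampled height.
#version 330 core
in vec2 vTexCoord;
in vec3 vViewDirTangent; // view direction in tangent space
out vec4 fragColor;

uniform sampler2D uColorTex;
uniform sampler2D uHeightMap;
uniform float uHeightScale; // e.g. 0.04

void main() {
    vec3 v = normalize(vViewDirTangent);
    float height = texture(uHeightMap, vTexCoord).r;
    vec2 uv = vTexCoord - v.xy / v.z * (height * uHeightScale);
    fragColor = texture(uColorTex, uv);
}
```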
Bump mapping works by using a texture of normals; this makes the lighting appear 3D, but it does not change the 3D data depending on the position of the viewer, only the shading. Bump mapping would also be fairly useless for the OP's sample image, since the surface is all at the same angle, just at different heights; bump mapping relies on changes in the surface's angles. You would have to slope the edges, like this.
Displacement mapping/tessellation uses a texture to generate more polygons rather than just being 1 polygon.
There's a video comparing all 3 here
EDIT: There is also relief mapping, which is similar to parallax. See the demo. There's a comparison video too (it's a bit low-quality, but relief mapping looks like it gives better depth).
I think what you're after is bump mapping. The link goes to a simple tutorial.
You may also be thinking of displacement mapping.
Of the techniques mentioned in other people's answers:
Bump mapping is the easiest to achieve, but doesn't do any occlusion.
Parallax mapping is probably the most complex to achieve, and doesn't work well in all cases.
Displacement mapping requires high-end hardware and drivers, and creates additional geometry.
Actually modeling the polygons is always an option.
It really depends on how close you expect the viewer to be and how prominent the bumps are. If you're flying down the Death Star trench, you'll need to model the bumps or use displacement mapping. If you're a few hundred meters up, bump mapping should suffice.
If you have DX11-class hardware then you could tessellate the polygon and then apply displacement mapping. See http://developer.nvidia.com/node/24. But then it gets a little complicated to get it running and to develop something on top of it.