OpenGL ES Texture Coordinates Slightly Off - c++

I'm trying to draw a subregion of a texture in OpenGL by specifying the texture coordinates I want. What's happening, though, is that depending on the size of the image there seems to be a slight offset in the origin of where it selects the texture coordinates. The offset is less than the size of a pixel, and the output is a blurred combination of neighboring pixels.
Here's an idea of what I'm describing. In this case I'd want to select the 6x5 green/white region but what OpenGL is rendering includes a slight pink tint to the top & left pixels.
What the output would look like:
I can fix it by adding an offset to the texture coordinates before passing them to glTexCoordPointer, but the problem is that I have no way to calculate what the offset should be, and it seems to differ between textures.
Pseudocode:
float uFactor = regionWidth / textureWidth; // For the example: 0.6f
float vFactor = regionHeight / textureHeight; // For the example: 0.5f
data[0].t[0] = 0.0f * uFactor;
data[0].t[1] = 0.0f * vFactor;
data[1].t[0] = 1.0f * uFactor;
data[1].t[1] = 0.0f * vFactor;
data[2].t[0] = 0.0f * uFactor;
data[2].t[1] = 1.0f * vFactor;
data[3].t[0] = 1.0f * uFactor;
data[3].t[1] = 1.0f * vFactor;
glPushMatrix();
// translate/scale/bind operations
glTexCoordPointer(2, GL_FLOAT, 0, data[0].t);

Keep in mind that OpenGL samples textures at the texel centers. So when using linear filtering (like GL_LINEAR or GL_LINEAR_MIPMAP_LINEAR), the exact texel color is only returned if you sample exactly at the texel center. Thus, when you only want to use a sub-region of a texture, you need to inset your texture coordinates by half a texel (or 0.5/width and 0.5/height). Otherwise the filtering will blend the border of the region with neighbouring texels outside of your intended region, which causes your slightly pinkish border. If you use the whole texture, this effect is compensated by the GL_CLAMP_TO_EDGE wrapping mode, but when using a sub-region GL does not know where its edge is and that filtering should not cross it.
So when you have a sub-region of the texture in the range [s1,s2]x[t1,t2] (with 0 <= s,t <= 1), the real valid texCoord interval is [s1+x, s2-x]x[t1+y, t2-y], with x being 0.5/width and y being 0.5/height (width and height being those of the whole texture, which corresponds to [0,1]x[0,1]).
Therefore try
data[0].t[0] = 0.0f * uFactor + 0.5/textureWidth;
data[0].t[1] = 0.0f * vFactor + 0.5/textureHeight;
data[1].t[0] = 1.0f * uFactor - 0.5/textureWidth;
data[1].t[1] = 0.0f * vFactor + 0.5/textureHeight;
data[2].t[0] = 0.0f * uFactor + 0.5/textureWidth;
data[2].t[1] = 1.0f * vFactor - 0.5/textureHeight;
data[3].t[0] = 1.0f * uFactor - 0.5/textureWidth;
data[3].t[1] = 1.0f * vFactor - 0.5/textureHeight;
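For what it's worth, the same inset can be wrapped in a tiny helper; here is a minimal sketch (the function and parameter names are illustrative, not from the original code):
// Compute the [u0,v0]..[u1,v1] texture-coordinate rectangle for a sub-region,
// inset by half a texel on every side so that GL_LINEAR filtering never
// samples texels outside the region. All sizes are in pixels.
void subRegionTexCoords(float regionWidth, float regionHeight,
                        float textureWidth, float textureHeight,
                        float& u0, float& v0, float& u1, float& v1)
{
    const float halfU = 0.5f / textureWidth;
    const float halfV = 0.5f / textureHeight;
    u0 = halfU;                                 // left edge, pushed in by half a texel
    v0 = halfV;                                 // top edge, pushed in by half a texel
    u1 = regionWidth  / textureWidth  - halfU;  // right edge, pulled in by half a texel
    v1 = regionHeight / textureHeight - halfV;  // bottom edge, pulled in by half a texel
}
For example, a 6x5 region in a 10x10 texture gives u in [0.05, 0.55] and v in [0.05, 0.45].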

Probably this has to do with the wrap mode. Try this when you create the source texture (on OpenGL ES use GL_CLAMP_TO_EDGE, since plain GL_CLAMP is not available there):
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE )
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE )


DirectX, scaling orthographic vertices to world space vertices

Introduction
Let's say I have the following vertices:
const VERTEX World::vertices[ 4 ] = {
    { -960.0f, -600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f }, // side 1 screen coordinates centred
    { 960.0f, -600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f },
    { -960.0f, 600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f },
    { 960.0f, 600.0f, 0.0f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f }
};
You may have guessed that 960 * 2 is 1920, which is the width of my screen, and the same goes for 600 * 2 being 1200.
These vertices represent a rectangle that will fill up my ENTIRE screen where the origin is in the centre of my screen.
Issue
So up until now, I have been using an Orthographic view without a projection:
matView = XMMatrixOrthographicLH( Window::width, Window::height, -1.0, 1.0 );
Any matrix that was being sent to the screen was multiplied by matView, and it seemed to work great. More specifically, my image, using the above vertices array, fit snugly in my screen and was 1:1 pixels to its original form.
Unfortunately, I need 3D now... and I just realised I'm going to need some projection... so I prepared this little puppy:
XMVECTOR vecCamPosition = XMVectorSet( 0.0f, 0.0f, -1.0f, 0 ); // I did try setting z to -100.0f but that didn't work well as I can't scale it back any more... and it's not accurate to the 1:1 pixel I want
XMVECTOR vecCamLookAt = XMVectorSet( 0.0f, 0.0f, 0.0f, 0.0f );
XMVECTOR vecCamUp = XMVectorSet( 0.0f, 1.0f, 0.0f, 0.0f );
matView = XMMatrixLookAtLH( vecCamPosition, vecCamLookAt, vecCamUp );
matProjection = XMMatrixPerspectiveFovLH(
    XMConvertToRadians( 45 ),        // the field of view
    Window::width / Window::height,  // aspect ratio
    1.0f,                            // the near view-plane
    100.0f );                        // the far view-plane
You may already know what the problem is... but if not: I have just set my field of view to 45 degrees. This will make a nice perspective and do 3D stuff great, but my vertices array is no longer going to cut the mustard, because with that FOV and screen aspect the vertices are far too huge for the current view I am looking at (see image).
I was thinking that I need to do some scaling to the output matrix to scale the huge coordinates back down to the respective size my fov and screen aspect is now asking for.
What must I do to use the vertices array as it is (1:1 pixel ratio to the original image size) while still allowing 3D stuff to happen, like having a fly fly around the screen, rotate, and go closer and further into the frustum, etc.?
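For reference, the scale the preceding paragraphs are hinting at falls out of the frustum height; here is a minimal sketch, assuming the 45-degree vertical FOV and the 1920x1200 window from the snippets above (the helper name is illustrative):
#include <DirectXMath.h>
#include <cmath>

// At a vertical FOV of fovY, the frustum is 2 * d * tan(fovY / 2) world units
// tall at a distance d in front of the camera. A quad that is screenHeight
// world units tall therefore fills the viewport exactly (1 world unit == 1
// pixel) when it sits at:
//     d = screenHeight / (2 * tan(fovY / 2))
// For fovY = 45 degrees and screenHeight = 1200 that is roughly 1448.5 units.
float PixelPerfectDepth( float screenHeight, float fovYRadians )
{
    return screenHeight / ( 2.0f * tanf( fovYRadians * 0.5f ) );
}

// Usage sketch: push the background quad that far in front of the camera
// (and move the far plane beyond it), leaving other 3D objects free to move:
//     float d = PixelPerfectDepth( 1200.0f, DirectX::XMConvertToRadians( 45.0f ) );
//     XMMATRIX worldTranslation = XMMatrixTranslation( 0.0f, 0.0f, d );
Note that this only keeps the background 1:1 at that one depth; as the answer below points out, it cannot hold for geometry that moves closer to or further from the camera.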
Goal
I'm trying to avoid changing every single object's vertices array to a rough scaled prediction of what the original image would look like in world space.
Some extra info
I just wanted to clarify what kind of matrix operations I am currently doing to the world and then how I render using those changes... so this is me doing some translations on my big old background image:
// flipY just turns the textures back around
// worldTranslation is actually the position for the background, so always 0,0,0... as you can see 0.5 was there to make sure I ordered things that were drawn correctly when using orthographic
XMMATRIX worldTranslation = XMMatrixTranslation( 0.0f, 0.0f, 0.5f );
world->constantBuffer.Final = flipY * worldTranslation * matView;
// My thoughts are on this that somehow I need to add a scaling to this matrix...
// but if I add scaling here... it's going to mean every other game object
// (players, enemies, pickups) are going to need to be scaled... and really I just want to
// have to scale the whole lot the once.. right?
And finally, this is where it is drawn to the screen:
// Background
d3dDeviceContext->PSSetShaderResources( 0, 1, world->textures[ 0 ].GetAddressOf( ) ); // Set up texture of background
d3dDeviceContext->IASetVertexBuffers( 0, 1, world->vertexbuffer.GetAddressOf( ), &stride, &offset ); // Set up vertex buffer (do I need the scaling here?)
d3dDeviceContext->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST ); // How the vertices be drawn
d3dDeviceContext->IASetIndexBuffer( world->indexbuffer.Get( ), DXGI_FORMAT_R16_UINT, 0 ); // Set up index buffer
d3dDeviceContext->UpdateSubresource( constantbuffer.Get( ), 0, 0, &world->constantBuffer, 0, 0 ); // set the new values for the constant buffer
d3dDeviceContext->OMSetBlendState( blendstate.Get( ), 0, 0xffffffff ); // DONT FORGET IF YOU DISABLE THIS AND YOU WANT COLOUR, * BY Color.a!!!
d3dDeviceContext->DrawIndexed( ARRAYSIZE( world->indices ), 0, 0 ); // draw
And then this is what I have done to apply my matProjection, which has supersized all my vertices:
world->constantBuffer.Final = flipY * worldTranslation * matView * matProjection; // My new lovely projection and view that make everything hugeeeee... I literally see like 1 pixel of the background brah!
Please feel free to take a copy of my game and run it as is (Windows 8 application Visual studio 2013 express project) in the hopes that you can help me out with putting this all into 3D: https://github.com/jimmyt1988/FlyGame/tree/LeanerFramework/Game
It's me again. Let me try to clear a few things up.
1
Here is a little screenshot from an editor of mine:
I have edited in little black boxes to illustrate something. The axis circles you see around the objects are rendered at exactly the same size. However, they are rendered through a perspective projection. As you can see, the one on the far left is something like twice as large as the one in the center. This is due purely to the nature of a projection like that. If this is unacceptable, you must use a non-perspective projection.
2
The only way it is possible to maintain a 1:1 ratio of screen space to uv space is to have the object rendered at 1 pixel on screen per 1 pixel on texture. There is nothing more to it than that. However, what you can do is change your texture filter options. Filter options are designed specifically for rendering non-1:1 ratios. For example, the code:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
Tells OpenGL: "If you are told to sample a pixel, don't interpolate anything. Just take the nearest value on the texture and paste it on the screen."
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
This code, however, does something much better: it interpolates between pixels. I think this may be what you want, if you aren't already doing it.
These pictures (taken from http://www.opengl-tutorial.org/beginners-tutorials/tutorial-5-a-textured-cube/) show this:
Nearest:
Linear:
Here is what it comes down to. You request that:
What must I do to use the vertices array as it is (1:1 pixel ratio to the original image size) while still allowing 3D stuff to happen, like having a fly fly around the screen, rotate, and go closer and further into the frustum, etc.?
What you are requesting is by definition not possible, so you have to look for alternative solutions. I hope that helps somewhat.

Opaque OpenGL textures have transparent border

My problem concerns rendering text with OpenGL -- the text is rendered into a texture, and then drawn onto a quad. The trouble is that the pixels on the edge of the texture are drawn partially transparent. The interior of the texture is fine.
I'm calculating the texture coordinates to hit the center of my texels, using NEAREST (non-)interpolation, setting the texture wrapping to CLAMP_TO_EDGE, and setting the projection matrix to place my vertices at the center of the viewport pixels. Still seeing the issue.
I'm working on VTK with their texture utilities. These are the GL calls that are used to load the texture, as determined by stepping through with a debugger:
glGenTextures(1, &id);
glBindTexture(GL_TEXTURE_2D, id);
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Create and bind pixel buffer object here (not shown, lots of indirection in VTK)...
glTexImage2D( GL_TEXTURE_2D, 0 , GL_RGBA, xsize, ysize, 0, format, GL_UNSIGNED_BYTE, 0);
// Unbind PBO -- also omitted
glBindTexture(GL_TEXTURE_2D, id);
glAlphaFunc (GL_GREATER, static_cast<GLclampf>(0));
glEnable (GL_ALPHA_TEST);
// I've also tried doing this here for premultiplied alpha, but it made no difference:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
The rendering code:
float p[2] = ...; // point to render text at
int imgDims[2] = ...; // Actual dimensions of image
float width = ...; // Width of texture in image
float height = ...; // Height of texture in image
// Prepare the quad
float xmin = p[0];
float xmax = xmin + width - 1;
float ymin = p[1];
float ymax = ymin + height - 1;
float quad[] = { xmin, ymin,
                 xmax, ymin,
                 xmax, ymax,
                 xmin, ymax };
// Calculate the texture coordinates.
float smin = 1.0f / (2.0f * imgDims[0]);
float smax = (2.0f * width - 1.0f) / (2.0f * imgDims[0]);
float tmin = 1.0f / (2.0f * imgDims[1]);
float tmax = (2.0f * height - 1.0f) / (2.0f * imgDims[1]);
float texCoord[] = { smin, tmin,
                     smax, tmin,
                     smax, tmax,
                     smin, tmax };
// Set projection matrix to map object coords to pixel centers
// (modelview is identity)
GLint vp[4];
glGetIntegerv(GL_VIEWPORT, vp);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
float offset = 0.5;
glOrtho(offset, vp[2] + offset,
        offset, vp[3] + offset,
        -1, 1);
// Disable polygon smoothing. Why not, I've tried everything else?
glDisable(GL_POLYGON_SMOOTH);
// Draw the quad
glColor4ub(255, 255, 255, 255);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, quad);
glTexCoordPointer(2, GL_FLOAT, 0, texCoord);
glDrawArrays(GL_QUADS, 0, 4);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
// Restore projection matrix
glMatrixMode(GL_PROJECTION);
glPopMatrix();
For debugging purposes, I've overwritten the outermost texels with red, and the next inner layer of texels with green (otherwise it's hard to see what's going on in the mostly-white text image).
I've inspected the texture in-memory using gDEBugger, and it looks as expected -- bright red and green borders around the texture area (the extra empty space is padding to make its size a power of two). For reference:
Here's what the final rendered image looks like (magnified 20x -- the black pixels are remnants of the text that was rendered under the debugging borders). Pale red border, but still a bold green inner border:
So it is just the outer edge of pixels that is affected. I'm not sure whether it's color blending or alpha blending that's screwing things up; I'm at a loss. I've noticed that the corner pixels are twice as pale as the edge pixels; perhaps that's significant... Maybe someone here can spot the error?
Could be a "pixel perfect" problem. OpenGL defines the center of a line to be the spot that gets rasterized into a pixel. The middle is exactly half way between 1 integer and the next... to get pixel (x,y) to display "pixel perfect"... fix up your coordinates to be:
x=(int)x+0.5f; // x is a float.. makes 0.0 into 0.5, 16.343 into 16.5, etc.
y=(int)y+0.5f;
This probably is what is messing up the blending. I had the same issues with texture modulating... a single somewhat dimmer line or series of pixels at the bottom and right edges.
Okay, I've worked on it for the last few days. There were a few ideas that didn't work at all. The only one that worked is to accept that this "perfect pixel" issue exists and try to work around it. Too bad I can't vote up your answer, Cosmic Bacon. But your answer, even though it looks good, will slightly break things in programs like games. My answer is an improved version of yours.
Here's the solution:
Step 1: Make a method that draws the texture you need, and use only it for drawing. And add 0.5f to every coordinate. Look:
public void render(Texture tex, float x1, float y1, float x2, float y2)
{
    tex.bind();
    GL11.glBegin(GL11.GL_QUADS);
    GL11.glTexCoord2f(0, 0);
    GL11.glVertex2f(x1 + 0.5f, y1 + 0.5f);
    GL11.glTexCoord2f(1, 0);
    GL11.glVertex2f(x2 + 0.5f, y1 + 0.5f);
    GL11.glTexCoord2f(1, 1);
    GL11.glVertex2f(x2 + 0.5f, y2 + 0.5f);
    GL11.glTexCoord2f(0, 1);
    GL11.glVertex2f(x1 + 0.5f, y2 + 0.5f);
    GL11.glEnd();
}
Step 2: If you're going to use glTranslatef(something1, something2, 0), it's a good idea to make a method that wraps glTranslatef and doesn't let the camera move by a fractional distance. Because if there's any chance the camera moves by, say, 0.3, sooner or later you'll see this issue again (multiple times, I suppose). The following code makes the camera follow an object that has X and Y, and the camera will never lose the object from its sight:
public void LookFollow(Block AF)
{
    float some = 5; // changing me will cause camera to move faster/slower
    float mx = 0, my = 0;

    // Right-Left
    if (LookCorX != AF.getX())
    {
        if (AF.getX() > LookCorX)
        {
            if (AF.getX() < LookCorX + 2)
                mx = AF.getX() - LookCorX;
            if (AF.getX() > LookCorX + 2)
                mx = (AF.getX() - LookCorX) / some;
        }
        if (AF.getX() < LookCorX)
        {
            if (2 + AF.getX() > LookCorX)
                mx = AF.getX() - LookCorX;
            if (2 + AF.getX() < LookCorX)
                mx = (AF.getX() - LookCorX) / some;
        }
    }

    // Up-Down
    if (LookCorY != AF.getY())
    {
        if (AF.getY() > LookCorY)
        {
            if (AF.getY() < LookCorY + 2)
                my = AF.getY() - LookCorY;
            if (AF.getY() > LookCorY + 2)
                my = (AF.getY() - LookCorY) / some;
        }
        if (AF.getY() < LookCorY)
        {
            if (2 + AF.getY() > LookCorY)
                my = AF.getY() - LookCorY;
            if (2 + AF.getY() < LookCorY)
                my = (AF.getY() - LookCorY) / some;
        }
    }

    // Evading "Perfect Pixel": snap the camera step to whole pixels
    mx = (int) mx;
    my = (int) my;

    // Moving camera
    GL11.glTranslatef(-mx, -my, 0);

    // Saving the position of the camera
    LookCorX += mx;
    LookCorY += my;
}

float LookCorX = 300, LookCorY = 200; // camera's starting position
As a result, we get a camera that moves a little less smoothly, because its steps can't be smaller than 1 pixel even when a smaller step would be needed, but the textures look okay, and that's great progress!
Sorry for such a big answer. I'm still working on a good solution; once I find something better and shorter, I'll replace this.

DX11 Alpha blending when rendering to a texture

FINAL EDIT:
Resolved... just needed to learn how alpha blending works in-depth. I should have had:
oBlendStateDesc.RenderTarget[a].DestBlendAlpha = D3D11_BLEND_ZERO;
...set to D3D11_BLEND_ONE to preserve the alpha.
When rendering straight to the backbuffer the problem is not noticed, as the colours blend normally and that is the final output. When rendering to the texture the same thing applies, but then, when the texture is rendered to the backbuffer, the incorrect alpha comes into play and the texture is blended into the backbuffer incorrectly.
I then ran into another issue where the alpha seemed to be decreasing. This is because the colour is blended twice, for example...
Source.RGBA = 1.0f, 0.0f, 0.0f, 0.5f
Dest.RGBA = 0.0f, 0.0f, 0.0f, 0.0f
Render into texture...
Result.RGB = Source.RGB * Source.A + Dest.RGB * (1 - Source.A) = 0.5f, 0.0f, 0.0f
Result.A = Source.A * 1 + Dest.A * 1 = 0.5f
Now...
Source.RGBA = 0.5f, 0.0f, 0.0f, 0.5f
Dest.RGBA = 0.0f, 0.0f, 0.0f, 0.0f
Render into backbuffer...
Result.RGB = Source.RGB * Source.A + Dest.RGB * (1 - Source.A) = 0.25f, 0.0f, 0.0f
Result.A = Source.A * 1 + Dest.A * 1 = 0.5f
To resolve this, when rendering the texture into the backbuffer I use the same blendstate but change the SrcBlend to D3D11_BLEND_ONE so the colour is not blended twice.
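Putting the two passes together, here is a rough sketch of the blend descriptions described above (the variable names are illustrative; only the render-target fields from the original setup are shown):
// Pass 1: render the text into the offscreen texture.
// Colour blends as before, but DestBlendAlpha is ONE so the alpha already
// in the texture is accumulated instead of being thrown away.
D3D11_BLEND_DESC toTexture = {};
toTexture.RenderTarget[0].BlendEnable           = TRUE;
toTexture.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
toTexture.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
toTexture.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
toTexture.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
toTexture.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ONE;  // was D3D11_BLEND_ZERO
toTexture.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
toTexture.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

// Pass 2: composite that texture onto the backbuffer.
// Its colour was already multiplied by alpha in pass 1, so SrcBlend is ONE
// here to avoid applying the alpha a second time.
D3D11_BLEND_DESC toBackbuffer = toTexture;
toBackbuffer.RenderTarget[0].SrcBlend = D3D11_BLEND_ONE;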
Hopefully this helps anyone else having a similar problem....
EDITEND
To increase performance I'm attempting to render a string of text that never changes into a texture to save rendering each individual character every time.
Since I'm rendering strictly in 2D, I've disabled the depth & stencil testing while enabling alpha blending.
Problem is, there doesn't seem to be any alpha blending happening; whatever is drawn last overwrites the current pixel with its own data... no blending.
I use a single blend state which I do not change. When rendering to the backbuffer the blending works fine. When rendering the final texture to the backbuffer the blending also works fine. It's just when I render to the texture that blending seems to fail.
Here's how I set up my single blend state:
D3D11_BLEND_DESC oBlendStateDesc;
oBlendStateDesc.AlphaToCoverageEnable = 0;
oBlendStateDesc.IndependentBlendEnable = 0; // set to false, so the loop below isn't strictly needed... but just in case
for (unsigned int a = 0; a < 8; ++a)
{
    oBlendStateDesc.RenderTarget[a].BlendEnable = 1;
    oBlendStateDesc.RenderTarget[a].SrcBlend = D3D11_BLEND_SRC_ALPHA;
    oBlendStateDesc.RenderTarget[a].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
    oBlendStateDesc.RenderTarget[a].BlendOp = D3D11_BLEND_OP_ADD;
    oBlendStateDesc.RenderTarget[a].SrcBlendAlpha = D3D11_BLEND_ONE;
    oBlendStateDesc.RenderTarget[a].DestBlendAlpha = D3D11_BLEND_ZERO;
    oBlendStateDesc.RenderTarget[a].BlendOpAlpha = D3D11_BLEND_OP_ADD;
    oBlendStateDesc.RenderTarget[a].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
}
// Create the blend state from the description
HResult = m_poDevice->CreateBlendState(&oBlendStateDesc, &m_poBlendState_Default);
m_poDeviceContext->OMSetBlendState(m_poBlendState_Default, nullptr, 0xffffff);
Are there any extra steps I am missing to enable blending when rendering to a texture?
EDIT: If I set AlphaToCoverageEnable to true it blends, but looks terrible. That at least confirms it is using the same blend state... it just behaves differently depending on whether I'm rendering to the backbuffer or to a texture :/ Here's my texture desc...
m_oTexureDesc.Width = a_oDesc.m_uiWidth;
m_oTexureDesc.Height = a_oDesc.m_uiHeight;
m_oTexureDesc.MipLevels = 1;
m_oTexureDesc.ArraySize = 1;
m_oTexureDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
m_oTexureDesc.SampleDesc.Count = 1; //No sampling
m_oTexureDesc.SampleDesc.Quality = 0;
m_oTexureDesc.Usage = D3D11_USAGE_DEFAULT; //GPU writes & reads
m_oTexureDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
m_oTexureDesc.CPUAccessFlags = 0;
m_oTexureDesc.MiscFlags = 0;
EDIT:
Here's some visualization...
Rendering to backbuffer - AlphaBlending enabled.
Rendering to texture - AlphaBlending enabled.
Rendering to backbuffer - AlphaBlending disabled.
Letter T taken from the font file
*When rendering with AB disabled, the letters match exactly (compare 4 & 3)
*When rendering to the backbuffer with AB enabled, the letters render slightly (hardly noticeable) washed out but still blend (compare 4 & 1)
*When rendering to a texture with AB enabled, the letters render even more noticeably washed out while not blending at all. (compare 4 & 2)
Not sure why the colours are washed out with alpha blending enabled... but maybe it's a clue?
EDIT:
If I clear the render target texture to say... 0.0f, 0.0f, 1.0f, 1.0f (RGBA, blue)... this is the result:
Only the pixels with alpha > 0.0f & < 1.0f blend with the colour. Another clue but I have no idea how to resolve this issue...

Omnidirectional shadow mapping with depth cubemap

I'm working with omnidirectional point lights. I already implemented shadow mapping using a cubemap texture as the color attachment of 6 framebuffers, encoding the light-to-fragment distance in each pixel of it.
Now I would like, if this is possible, to change my implementation this way:
1) attach a depth cubemap texture to the depth buffer of my framebuffers, instead of colors.
2) render depth only, do not write color in this pass
3) in the main pass, read the depth from the cubemap texture, convert it to a distance, and check whether the current fragment is occluded by the light or not.
My problem comes when converting back a depth value from the cubemap into a distance. I use the light-to-fragment vector (in world space) to fetch my depth value in the cubemap. At this point, I don't know which of the six faces is being used, nor what 2D texture coordinates match the depth value I'm reading. Then how can I convert that depth value to a distance?
Here are snippets of my code to illustrate:
Depth texture:
glGenTextures(1, &TextureHandle);
glBindTexture(GL_TEXTURE_CUBE_MAP, TextureHandle);
for (int i = 0; i < 6; ++i)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT,
                 Width, Height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
Framebuffers construction:
for (int i = 0; i < 6; ++i)
{
    glGenFramebuffers(1, &FBO->FrameBufferID);
    glBindFramebuffer(GL_FRAMEBUFFER, FBO->FrameBufferID);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, TextureHandle, 0);
    glDrawBuffer(GL_NONE);
}
The piece of fragment shader I'm trying to write to achieve this:
float ComputeShadowFactor(samplerCubeShadow ShadowCubeMap, vec3 VertToLightWS)
{
    float ShadowVec = texture(ShadowCubeMap, vec4(VertToLightWS, 1.0));
    ShadowVec = DepthValueToDistance(ShadowVec);
    if (ShadowVec * ShadowVec > dot(VertToLightWS, VertToLightWS))
        return 1.0;
    return 0.0;
}
The DepthValueToDistance function being my actual problem.
So, the solution was to convert the light-to-fragment vector to a depth value, instead of converting the depth read from the cubemap into a distance.
Here is the modified shader code:
float VectorToDepthValue(vec3 Vec)
{
    vec3 AbsVec = abs(Vec);
    float LocalZcomp = max(AbsVec.x, max(AbsVec.y, AbsVec.z));

    const float f = 2048.0;
    const float n = 1.0;
    float NormZComp = (f + n) / (f - n) - (2 * f * n) / (f - n) / LocalZcomp;
    return (NormZComp + 1.0) * 0.5;
}

float ComputeShadowFactor(samplerCubeShadow ShadowCubeMap, vec3 VertToLightWS)
{
    float ShadowVec = texture(ShadowCubeMap, vec4(VertToLightWS, 1.0));
    if (ShadowVec + 0.0001 > VectorToDepthValue(VertToLightWS))
        return 1.0;
    return 0.0;
}
Explaination on VectorToDepthValue(vec3 Vec) :
LocalZcomp corresponds to what would be the Z component of the given Vec in the matching frustum of the cubemap. It's actually the largest component of Vec (for instance, if Vec.y is the biggest component, we will look either at the Y+ or the Y- face of the cubemap).
If you look at this Wikipedia article, you will understand the math just after it (I kept it in a formal form for understanding), which simply converts LocalZcomp into a normalized Z value (in [-1..1]) and then maps it into [0..1], the actual range for depth buffer values (assuming you didn't change it). n and f are the near and far values of the frustums used to generate the cubemap.
ComputeShadowFactor then simply compares the depth value from the cubemap with the depth value computed from the fragment-to-light vector (named VertToLightWS here), also adding a small depth bias (which was missing in the question), and returns 1 if the fragment is not occluded by the light.
I would like to add more details regarding the derivation.
Let V be the light-to-fragment direction vector.
As Benlitz already said, the Z value in the respective cube side frustum/"eye space" can be calculated by taking the max of the absolute values of V's components.
Z = max(abs(V.x),abs(V.y),abs(V.z))
Then, to be precise, we should negate Z because in OpenGL, the negative Z-axis points into the screen/view frustum.
Now we want to get the depth buffer "compatible" value of that -Z.
Looking at the OpenGL perspective matrix...
http://www.songho.ca/opengl/files/gl_projectionmatrix_eq16.png
http://i.stack.imgur.com/mN7ke.png (backup link)
...we see that, for any homogeneous vector multiplied with that matrix, the resulting z value is completely independent of the vector's x and y components.
So we can simply multiply this matrix with the homogeneous vector (0,0,-Z,1) and we get the vector (components):
x = 0
y = 0
z = (-Z * -(f+n) / (f-n)) + (-2*f*n / (f-n))
w = Z
Then we need to do the perspective divide, so we divide z by w (Z) which gives us:
z' = (f+n) / (f-n) - 2*f*n / (Z* (f-n))
This z' is in OpenGL's normalized device coordinate (NDC) range [-1,1] and needs to be transformed into a depth buffer compatible range of [0,1]:
z_depth_buffer_compatible = (z' + 1.0) * 0.5
Further notes:
It might make sense to upload the results of (f+n), (f-n) and (f*n) as shader uniforms to save computation.
V needs to be in world space, since the shadow cube map is normally axis-aligned in world space; thus the "max(abs(V.x),abs(V.y),abs(V.z))" part only works if V is a world-space direction vector.
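Following up on the first note, here is a rough sketch of precomputing those terms on the CPU and uploading them once (the uniform names and loader are illustrative, not from the original code):
#include <GL/glew.h> // or whatever GL loader the project already uses

// Upload the constant parts of VectorToDepthValue() once per program,
// instead of recomputing (f+n)/(f-n) and 2*f*n/(f-n) for every fragment.
// n and f must match the near/far planes used when rendering the cubemap.
void UploadCubemapDepthUniforms(GLuint program, float n, float f)
{
    glUseProgram(program);
    glUniform1f(glGetUniformLocation(program, "u_FpnOverFmn"), (f + n) / (f - n));
    glUniform1f(glGetUniformLocation(program, "u_2fnOverFmn"), (2.0f * f * n) / (f - n));
}

// With those uniforms declared in the fragment shader, the body of
// VectorToDepthValue() reduces to:
//     float NormZComp = u_FpnOverFmn - u_2fnOverFmn / LocalZcomp;
//     return (NormZComp + 1.0) * 0.5;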

Texture wrong value in fragment shader

I'm loading custom data into a 2D texture with the GL_RGBA16F format:
glActiveTexture(GL_TEXTURE0);
int Gx = 128;
int Gy = 128;
GLuint grammar;
glGenTextures(1, &grammar);
glBindTexture(GL_TEXTURE_2D, grammar);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA16F, Gx, Gy);
float* grammardata = new float[Gx * Gy * 4](); // set default to zero
*(grammardata) = 1;
glTexSubImage2D(GL_TEXTURE_2D,0,0,0,Gx,Gy,GL_RGBA,GL_FLOAT,grammardata);
int grammarloc = glGetUniformLocation(p_myGLSL->getProgramID(), "grammar");
if (grammarloc < 0) {
    printf("grammar missing!\n");
    exit(0);
}
glUniform1i(grammarloc, 0);
When I read the value of uniform sampler2D grammar in GLSL, it returns 0.25 instead of 1. How do I fix the scaling problem?
if (texture(grammar, vec2(0,0)).r == 0.25) {
    FragColor = vec4(0,1,0,1);
} else {
    FragColor = vec4(1,0,0,1);
}
By default texture interpolation is set to the following values:
GL_TEXTURE_MIN_FILTER = GL_NEAREST_MIPMAP_LINEAR,
GL_TEXTURE_MAG_FILTER = GL_LINEAR
GL_WRAP[R|S|T] = GL_REPEAT
This means that in cases where the mapping between texels of the texture and pixels on the screen does not fit exactly, the hardware will interpolate for you. There can be two cases:
The texture is displayed smaller than it actually is: in this case interpolation is performed between two mipmap levels. If no mipmaps are generated, these are treated as being 0, which could lead to 0.25.
The texture is displayed larger than it actually is (and I think this will be the case here): here, the hardware does not interpolate between mipmap levels, but between adjacent texels in the texture. The problem now comes from the fact that (0,0) in texture coordinates is NOT the center of texel [0,0], but its lower left corner.
Have a look at the following drawing, which illustrates how texture coordinates are defined (here with 4 texels)
tex-coord: 0 0.25 0.5 0.75 1
texels |-----0-----|-----1-----|-----2-----|-----3-----|
As you can see, 0 is on the boundary of a texel, while the first texel's center is at 1/(2 * |texels|).
This means for you that, with the wrap mode set to GL_REPEAT, texture coordinate (0,0) will interpolate uniformly between the texels [0,0], [-1,0], [-1,-1] and [0,-1]. Since -1 == 127 (due to repeat) and everything except [0,0] is 0, this results in
([0,0] + [-1,0] + [-1,-1] + [0,-1]) / 4 = (1 + 0 + 0 + 0) / 4 = 0.25
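In practice, two fixes follow from this; here is a minimal sketch continuing the setup code from the question (the shader lines in the comments are illustrative):
// Option 1: turn filtering off so the lookup returns the stored texel value.
// (The texture has a single level, so the MIN filter must not reference mipmaps.)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Option 2: keep GL_LINEAR but sample at the center of texel (0,0), i.e. at
// (0.5/Gx, 0.5/Gy) instead of (0,0). In the shader:
//     texture(grammar, vec2(0.5 / 128.0, 0.5 / 128.0))
// or fetch the texel directly by integer coordinate, bypassing filtering and
// wrapping altogether:
//     texelFetch(grammar, ivec2(0, 0), 0)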